Source: Digitimes
This is the first time I've heard of the GTX 500 series; looks like we're set for a good ole fashioned graphics card hoedown.
-
There were some reports of the GTX 580 a few days ago, but we're still awaiting confirmation on the specs. Both the AMD 6xxx series and the nVidia 5xx series are just incremental steps up from the current series.
-
Do I see cards with 1000W power draw under load coming? I can't imagine how much more power these cards will consume... and not to mention how hot they'll get. Maybe 100C+ under load?
-
Meaker@Sager Company Representative
No, I think now that Fermi is king of the compute market, they are taking the same approach as the GTX 460 and scaling it up.
-
http://www.electronista.com/articles/10/06/18/gf104.could.replace.most.nvidia.hardware/
When it hits the market, I would expect the same power consumption in the worst case, but with any luck 10-15% better at the least. -
Look at this... it sums up my opinion of Fermi.
YouTube - Hitler reacts to Nvidia Fermi Benchmarks -
What the hell is up with all these graphics fanboys?
Support whoever actually innovates the industry and pushes it forward. It doesn't matter what field it is. Innovation is a good thing, and one that we should support.
So in short, I don't give a crap about either the ATI 6xxx series or the GTX 580. Both will be on par with each other in performance and price, and both are minute incremental upgrades over the current graphics cards that don't warrant upgrading to. What excites me is whatever the hell Maxwell and Kepler are, and whatever ATI is doing to compete with them. Whichever of these actually pushes the industry forward is what I'm interested in. -
Per watt, the new HD 6800s outperform the GF104-based GTX 460 by up to 15%, while using only about 5% more silicon than GF106 to produce. -
I see up to a 9% difference, and people who complain about power consumption while running 1KW PSUs make me laugh. -
Are you going to make a 4th edit? (I guess you were)
That's total power consumption, not perf/watt.
http://www.techpowerup.com/reviews/HIS/Radeon_HD_6850/29.html -
For a long time now, I've preferred ATI cards.
But I'm growing fond of Nvidia precisely because of Fermi.
Say what you want, but at least Fermi is innovative. Expensive, yes; buggy, yes; risky, yes. Every single significant innovation in history was expensive, buggy, and risky at first. That's a fact of life.
But we should encourage innovation. Because it pushes technology forward. Kepler and Maxwell also seem like they will be very innovative.
Whereas I haven't seen much innovation coming from ATI as of late.
I'm not looking to buy a card anytime soon, so I can wait for the bugs to be worked out, and I can wait for Kepler. But I hope that ATI has some awesome innovations of their own that they're planning to implement in their next-gen cards. And I'm fairly confident that they do and are just being secretive about it; otherwise, they will lose out in the long term. -
AMD, with the 10.10 drivers, has already begun releasing drivers packaged with OpenCL.
AMD APP will be used in their Fusion processors, with a Phenom II and an HD 5xxx in one package. This will allow netbooks with a combined CPU/GPU TDP of 25 watts to have all the GPGPU support you would want in a netbook, even playing DX11 games with tessellation on.
AMD APP on an HD 5970 was shown to be faster at some calculations than even a $10,000 Tesla card. Password recovery, for instance, involves massive amounts of calculations.
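(To illustrate why password recovery chews through "massive amounts of calculations" so well on a GPU, here's a toy sketch; it's in CUDA rather than AMD APP/OpenCL purely for brevity, and toyHash is a made-up stand-in for a real hash like MD5 or a WPA handshake. The point is just that every candidate gets its own thread.)

// Toy brute-force sketch: each thread tests one candidate 4-digit PIN.
// toyHash is a placeholder mixing function, NOT a real password hash.
#include <cstdio>

__host__ __device__ unsigned toyHash(unsigned pin) {
    pin ^= pin >> 3;
    return pin * 2654435761u; // multiplicative constant, just for mixing
}

__global__ void crack(unsigned target, int *found) {
    unsigned pin = blockIdx.x * blockDim.x + threadIdx.x; // one candidate per thread
    if (pin < 10000 && toyHash(pin) == target)
        *found = (int)pin;
}

int main() {
    unsigned target = toyHash(7777); // pretend 7777 is the unknown PIN
    int *found;
    cudaMallocManaged(&found, sizeof(int)); // unified memory for simplicity
    *found = -1;
    crack<<<40, 256>>>(target, found);      // 10240 threads cover all 10000 PINs
    cudaDeviceSynchronize();
    printf("recovered PIN: %d\n", *found);
    cudaFree(found);
    return 0;
}

(Scale the candidate space up to real passwords and you get exactly the embarrassingly parallel workload the HD 5970 benchmark above exploits.)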
AMD is just as innovative as Nvidia. Just because one company doesn't market their junk by forcing every reviewer to regurgitate their marketing blurb continuously doesn't make them more innovative.
An AMD FirePro is 80% of a Fermi Quadro and actually faster in real-time rendering. $1,500 vs $5,000. Two FirePros, still $2,000 less than one Quadro, blow it away.
Two FirePros for less than one Fermi Quadro = AWESOME 12-DISPLAY EYEFINITY setup!
How can you forget that with Fermi, Nvidia just runs around screaming that tessellation is awesome? AMD has had tessellation units since the HD 3xxx cards, and they first implemented it in the Xbox 360 nearly half a decade ago... -
FirePros have bugs in workstation apps. Half the price doesn't make them worth the bother.
-
Why don't you post some benchmarks that involve the actual applications the Tesla is designed for, instead of some neat application that has no enterprise value? I would also suggest next time pulling a benchmark from somewhere other than the seller's own product description.
http://igogadget.com/2010/03/16/recover-wi-fi-and-iphone-passwords-using-ati-video-cards/
-
Can never argue with Nvidia fanboys, lols. Yay, Quadros are godly, FirePros are bugged and terrible, nVidia are amazing. Whatever.
-
And in a real-world situation, the data I presented wasn't irrelevant. Password recovery may not be relevant for you, but for much of the corporate world and anyone interested in security, it's huge. Gimme a break; just because it doesn't suit your needs for processing 32 hours of CGI doesn't mean it's not relevant.
I must remember: only what you deem important is relevant, and everything else is just irrelevant data. Thank goodness for the ignore feature. I don't mind having a discussion, but this is nothing close to a conversation at all, just plain insulting and derogatory. -
Why do people keep assuming that the GTX 480M runs hot? It is definitely cooler than my 5870M. Now that's a card that runs hot, considering it uses less wattage.
-
Anyway, the 6970 will probably have "20%" on the 5870 and the cycle continues. Shame the 5xx Fermi won't be any cheaper, though. -
Okay people. FERMI does not run hot just because. How hot it runs depends on the cooling system. FERMI draws much more power. That's it.
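(Rough back-of-the-envelope version of that point, with purely illustrative numbers: junction temperature is roughly ambient plus power draw times the cooler's thermal resistance,

$$T_{\text{junction}} \approx T_{\text{ambient}} + P \cdot R_{\text{th}}$$

so a 250W chip on a 0.25 C/W cooler sits near 25 + 250(0.25) ≈ 88C, while a beefier 0.15 C/W cooler holds the same chip to about 63C. Same silicon, same power, very different temperatures.)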
As for innovation, I have to give it to AMD. If you study anything regarding the management of technology, you would learn how to properly apply the term. Innovation is not only about inventing or pushing forward; it's about making money, proven on the market.
A new technology can't be called innovative if it hasn't been successful on the market.
Anyways, a 20% increase over the 480 is nothing, and it's probably in line with the supposed increase to 512 cores.
We will have to wait until next-gen cards to get massive increases.
Now keep on fighting. I'll go back to playing games. -
Also note that GPUs are typically rated for a maximum junction temperature of around 105C. No card can sustain temperatures that high without almost immediately damaging itself.
Please.
The GTX 580 is supposed to be based on GF104, which is a refinement of the Fermi architecture (GF106 and GF108 were cut-down versions of GF104) designed to use less power and be more efficient. GF110 is in the works as well, but there is not much information about it yet.
GF104 was the fix. Or at least a significant enough fix that a GTX 460 is on par with or more powerful than a GTX 275 while drawing even less power.
Not entirely sure about GF106 or GF108 but bottom line is that they are an improvement over what they replaced while drawing around the same amount of power. -
Tried and true designs are more likely to succeed. Radical alterations on what people are used to are far more likely to never see wide adoption. Yet, without such departures, things would never get better.
Some of the greatest innovations of all time failed, often because they were ahead of their time.
And the technologies and ideas they pioneered often end up succeeding much much later...
Examples...
Betamax
The Segway
LaserDisc
The Newton
And every single thing on this list...
Nine Technologies Ahead of Their Time - PCWorld -
They've also got 64KB "cache" per SM (48KB L1 cache with 16KB shared memory, or 16KB L1 with 48KB shared). ATI does not have anything that configurable. This, for high performance computing, is a big deal to me.
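(A minimal sketch of what that configurability looks like in practice, using the standard CUDA runtime call for it; the kernel here is just a placeholder.)

// Toggle the Fermi 64KB per-SM split between L1 cache and shared memory.
__global__ void myKernel() { /* placeholder compute kernel */ }

int main() {
    // Favor shared memory for this kernel: 16KB L1 / 48KB shared...
    cudaFuncSetCacheConfig(myKernel, cudaFuncCachePreferShared);
    myKernel<<<1, 1>>>();

    // ...or favor cache: 48KB L1 / 16KB shared.
    cudaFuncSetCacheConfig(myKernel, cudaFuncCachePreferL1);
    myKernel<<<1, 1>>>();

    cudaDeviceSynchronize();
    return 0;
}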
nVidia also has ECC support built in to their higher end products, and the double precision performance is at least as good as ATI's.
I cannot remember the GFLOPS ratings for ATI's cards right now but the Tesla units perform at a 1:2 ratio for double:single precision floating point math. It was 1:8 for last gen's nVidia cards.
You buy $10,000 Tesla cards because you want ultimate reliability. Memory runs at about 1.1GHz instead of ~1.4GHz to ensure stability for longer periods of time (what GTX480 card is rated to run 24-7 for 3-5 years straight?), cores are downclocked slightly to again ensure longer term reliability, ECC support (needed for real scientific computing), two DMA controllers (more efficient memory transfers while the GPU is busy), and that 1:2 ratio on FP performance (again). Those $10,000 cards are still cheaper than building full blown high performance clusters...*sometimes*.
-
Wow, learned a ton from your post, Greg. I will admit there are definitely advantages to Fermi, but... Of course I know that at a budget price of $1,500 it likely won't have ECC memory modules. But for $1,500 you get the fastest memory bandwidth at 147 GB/s, exceeding even the Quadro 6000, plus 2.4 TFLOPS single precision and 532 GFLOPS double precision; that is just as innovative from AMD in their own manner. The Tesla C2050 is 1.03 TFLOPS single precision and 515 GFLOPS double precision. Can't comment on AMD drivers, but it seems AMD is well aware of their reputation, and hopefully that issue will no longer be a deciding point for purchasing. And I'm sure that when demand for FirePro grows, ECC memory will be added, if AMD chooses to compete at $5,000-$10,000 for their GPUs.
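(Working out the double:single precision ratios from those quoted numbers:

$$\text{FirePro: } \frac{532}{2400} \approx 1{:}4.5 \qquad \text{Tesla C2050: } \frac{515}{1030} = 1{:}2$$

so the Tesla keeps a much larger fraction of its peak throughput in double precision, even though the FirePro's absolute DP number is slightly higher.)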
- In my defense on the innovation point: I think providing that kind of performance for $1,500, for anyone whose data may not be as critical as yours and for whom a system without ECC memory modules would suffice, is very innovative. Something Nvidia is unable or uninterested in providing.
Anyways, sounds like you do some serious computing, Ross! -
The media and almost everyone use the word innovative for just about everything. The amount of technology, inventions, and patents with radically new approaches is so overwhelmingly huge that the word innovative would lose its entire meaning. Which is why the word innovative is usually used to describe ideas with actual market potential.
If you have a great idea, but it tanked with no success on the market, it was probably a great invention ahead of its time, but it wasn't innovative, as it never reached its potential.
Design and innovation management - Guiding principles of good design -
All I want to know is: when is this GPU going to be released?
-
Is Notebookcheck correct when they claim a 480M SLI setup actually consumes less power than the equivalent CrossFire 5870M? Funny, since the single cards are the exact opposite in power consumption ratings.
Back on topic: early days yet. It sounds good that Nvidia is announcing the counter to the AMD 6xxx series, but saying it's 20% better than the 480M right off the bat smacks of hyperbole. -
It's purely an HPC/desktop GPU; even chopped down and underclocked like the GF100-based GTX 480M, it doesn't make sense in notebooks, and Nvidia is better off rebranding GF104 for a mobile GTX 580M... especially when you consider Nvidia is making little to no profit on each GF104 in the desktop market, yet thinks it can charge a $395 premium over GF106 for scrap GF104 parts as the GTX 470M.
The rumored GF112/114 parts have a somewhat better chance of making it into some kind of mobile form, but so far the rumors suggest only an extra 64 bits of memory bus width over a fully enabled GF104. That's only an improvement for the mobile market if downclocked GF104s are bandwidth-limited. -
I consider this announcement of the 580 a 100% diversionary tactic. Nvidia isn't even close to a new lineup; they just finished their 450 release within the past few weeks. AMD has had their 5xxx series out for over a year; it's mature and in need of a refresh. Now that AMD is ready to move on to the next generation, Nvidia wants to keep themselves relevant.
{edit} What am I saying!!! We all know Nvidia never comes out with new products more than twice a decade. I'm sure they already have the 580 & 680 series stickers ready to slap on their current products to keep things "fresh" as they concentrate on the Tegra platform. -
Unlike AMD/ATI, Nvidia has been diversifying into different market sectors that will grow their overall lineup and, in the long run, make them a better, bigger company. AMD bought ATI for a reason: they wanted to diversify into a new sector instead of relying on their rivals for GPU solutions.
Nvidia is no different. Their road map is capturing even more of the professional market and making a statement in the expanding, competitive cell phone market. Also, we know they're contending against Intel in the wireless display market as well... that should be fun to watch. -
GapItLykAMaori Notebook Evangelist
-
Nvidia has watched their overall market share decline over the past few years, all while still holding the larger share of the professional/business (Quadro/FirePro) market segment. ATI has made all its gains on the Radeon line. Sure, Nvidia will come back, but you can't deny the history of the past 3 years. Nvidia sat on their hands, counting on the G80/G92 to carry them for 3 years until they released their savior: Fermi. And you're right, ATI didn't change much... but they didn't have to. -
Suddenly, the GTX 460 doesn't seem that awesome anymore, huh? Let's see a GTX 560 come around quickly to lower prices even more! -
The GTX 460 is still the best deal; there have been great deals from $100-200, while the HD 6850 is new and hence still $200+.
-
Really thankful for AMD releasing the 6xxx lineup.
Means there will be plenty of used GTX 460s on ebay soon enough. I'll get one for dirt cheap to slap in SLI with my current one and enjoy a nice jump in performance.
<3 competition. Everyone wins. -
GapItLykAMaori Notebook Evangelist
-
$5.4 billion will take a company like AMD a while to recover from.
People assume just because AMD did very well with the 5xxx series that they're suddenly in the clear. Not even close. -
Let the rebranding begin.
GT 555M spotted in Nvidia 261.00 Drivers -
How do you know it's a rebrand? If you mean reusing the architecture, ATI is using the same architecture for their 6xxx series, though they have refined their shader complexity; nVidia did something similar going from GF100 to GF104.
-
Because it takes a lot longer than a month and a half to make those kinds of arch changes.
-
No, G92 -> GF100 is the arch change; GF104 is a refined version of GF100.
-
The more likely scenario is that these are respins of the existing arch to fix yield problems, so they get more fully functional 384-core GF104s and 512-core GF100s that could likely also hold higher clocks more effectively. All silicon goes through respins, and it's nothing that deserves a new generation of model numbers. At most it deserves a b-suffix at the end of the GPU codename: GF100b, GF104b, etc.
Nvidia announces GTX 580 - +20% over GTX 480
Discussion in 'Gaming (Software and Graphics Cards)' started by Lozz, Oct 21, 2010.