Specifications
It says next-generation ATI. Does that mean this is the 6900 series?
-
-
Seems highly plausible. I have recently seen various articles regarding Mobile Radeon 6000 series yields/specs/prices, so it would make sense.
-
The Mobility 6900 will be, in all likelihood, announced at CES. There is also rumour that the 580M/485M is around the corner, possibly waiting for CES as well.
EUROCOM Cheetah 3.0
17.3-inch Full HD; 1920-by-1080 pixels; Glossy or Matte; optional 3D
Still 16:9... I call that bull**! Too bad the Dell Precisions are so expensive. -
I look forward to a new GPU that is reasonably priced instead of the overpriced GTX 470M. I am looking to get a new GPU, so I will keep an eye on this news.
Edit:
Just noticed the link is Eurocom, which is about as reliable a source as a brick wall. -
You can already see leaked benchmark results of 6970M Crossfire for the M17x-R3 in the Alienware forum. It exists and is coming very soon.
-
We've already discussed those benchmarks in another thread on this forum and they are not very reliable. The problem is the CPU score, which seems to be tweaked. A mobile 2720QM scoring as high as a desktop i7-975 raises a lot of questions.
Even so, if that is a single-card score, then the Mobility 6970 should be about 35% faster than the 5870. Our initial estimates were for around 20-25%. -
That's definitely not a single 6970M; it's Crossfire. The only skewed result in those benchmarks is the one showing the Intel HD graphics, which is likely just a bug that Optimus-based systems experience (the M11x has that issue too). Otherwise, the scores are consistent with what is expected of Sandy Bridge and 6970M Crossfire.
Edit: The 2720QM turbo boosts to 3.3 GHz, which puts it on par with current 920XM CPUs. The 3DMark06 results, taking into account greater CPU efficiency along with faster GPUs, are pretty close to the mark. With a 920XM-equipped Crossfire system, the CPU score is not far behind at those clocks. 3DMark06 doesn't utilize all 4 cores efficiently like Vantage does, and the CPU frequently turbo boosts well above 2.2 GHz. Take a look at what my overclocked 940XM was achieving: http://forum.notebookreview.com/att...official-m17x-benchmark-thread-part-3-23k.jpg
Granted, the TDP is ramped up with the 940XM, but considering that SB will have Turbo Boost that can exceed TDP as long as the cooling permits, 20k is easy to get. -
Not really, SB is just 10% faster clock for clock than the previous generation, so all the extra performance you can get is from the smaller manufacturing process, which allows for higher clocks. This means that, just as you said, the 2720QM will perform similar to the 940XM. However, the 940XM has a score of around 4000 points in 3DMark06, far less than the 5600 CPU score in that benchmark. The 2720QM will score 5600 only if all 4 cores are running at 3.3 GHz (maximum Turbo Boost). I am not entirely sure if this is possible with Turbo Boost 2.0, but if it is, it is only for very brief periods of time.
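Here's a rough back-of-envelope of that claim in Python. The 940XM's sustained clock during the CPU test is my guess; the 4000-point baseline and the 10% clock-for-clock figure are the numbers above:

```python
# Scale a known 3DMark06 CPU score by clock ratio and IPC gain.
# Assumption: the CPU test scales roughly linearly with both.

BASELINE_SCORE = 4000   # ~940XM 3DMark06 CPU score quoted above
BASELINE_CLOCK = 2.6    # GHz, assumed average sustained clock for the 940XM
IPC_GAIN = 1.10         # Sandy Bridge ~10% faster clock for clock

def estimate_cpu_score(sustained_ghz):
    """Estimated 2720QM CPU score at a given sustained clock."""
    return BASELINE_SCORE * IPC_GAIN * (sustained_ghz / BASELINE_CLOCK)

for ghz in (2.2, 2.8, 3.3):
    print(f"2720QM sustained at {ghz} GHz -> ~{estimate_cpu_score(ghz):.0f} points")
# Only the full 3.3 GHz case lands near 5600; lower sustained clocks fall well short.
```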
Now, taking into consideration that 3DMark06 is very sensitive to CPU performance, the 20k score is not very reliable for measuring performance, which is what I've been trying to point out here. -
It is CPU dependent, but a faster GPU also pushes up the final CPU score. This is easily proven by using the same CPU in an M17x and disabling one of the GPUs: the final CPU score also drops. Thus a 20k score with a 45W TDP (with TB 2.0) 2720QM + 6970M Crossfire is entirely plausible.
Edit: Look at this score: http://img41.imageshack.us/img41/4146/3dmark0622335.jpg When he ran that benchmark, ThrottleStop had not been released yet, so all he did was use the BIOS BCLK overclock + eVGA (though this was pointless due to TDP limitations). Now pay attention to his CPU score: 4546 points with just a 5% BCLK increase (which, again, with TDP limitations is largely useless in 3DMark06). I'm confident those posted scores are entirely legit. -
I didn't say it's not plausible. Given that with some overclocking a 5870M Crossfire setup can be pushed way above 20k, a 6970M setup should easily reach and go beyond 20k.
What I am saying is that it is not reliable for comparing performance. There are too many unknown elements to draw a solid conclusion.
Are you sure this is a Crossfire setup? Any link? -
Well, given my experience with Crossfire setups, 23k-24k is the absolute max a 5870M Crossfire system can reach. Thus, unless AMD drastically increased single-GPU performance, 20k for a single GPU is not realistic. -
According to Eurocom, several things are new:
- four RAM slots, which means 8GB of cheap memory and a max of 32GB!
- 2 x USB 3.0 ports, which is one more (five USB ports total) compared to the current gen
- AMD 69xx will be awesome considering the Alienware leaked benchmarks.
This laptop used to have a 120W AC adapter, which is not enough for a 940XM and GTX 470M + OCing + 5 USB ports + other ports and peripherals, etc... not to mention the 480M.
I expect the new W870CU to come with a 150W+ AC adapter. -
-
On the other hand,
if that score is for a Crossfire system, I would say it is definitely too low given that CPU score, unless we are talking about a 6850M Crossfire solution, in which case it is fully possible.
As you can see, there are so many variations that can lead to that score that it's simply impossible to tell anything about the performance of the card/cards tested there. -
Lol, it's Eurocom. How did their leaked prices for the 460M and 470M turn out?
That's all there is to say for right now. -
Single 6900... yeah, maybe if we were talking about a desktop. -
Edit: reposting in a minute.
-
That's a very long minute...
-
I decided not to even get involved with a debate over 3DMark06 scores.
All that needs to be said is that the 6850 is a consistent minimum of 25% faster than the 5770 @ 1920x1200, and that advantage will carry over to the 6970M vs. the 5870M. -
ASUS Radeon HD 6850 Direct CU 1 GB Review - Page 29/32 | techPowerUp
PowerColor HD 6850 PCS+ 1 GB Review - Page 29/32 | techPowerUp
AMD Radeon HD 6800 Series Is the New Midrange Graphics Champ - PCWorld
EDIT: I just did some calculations, and Kevin actually has a point: it might be a single GPU, but it would have to have a slightly higher TDP than the 5770, say 70-75W.
I only disagree with him on one aspect: that 25% is not the minimum but the maximum possible performance gain. -
-
Now figure Blackcomb being a full Barts XT at or around 80% of the desktop HD6850's clocks.
When's the last time ATI/AMD used a cut-down GPU for mobile parts? -
For the 6900M to truly sit between Barts XT and Barts Pro, the transistor count has to increase considerably along with the die size. I saw the projected roadmap where it's supposed to be 256-bit as well.
If you look at the full board power consumption of these 5870Ms, they pull 65-75W already. I measured the 5870M's power consumption when I had my kill-a-watt by alternating between active Crossfire and a single card with FurMark running, and it varied from 65W to 75W per card. If Blackcomb is going to have 700 million more transistors, that's roughly a 40% increase, which should come with a similar power consumption increase. You're looking at 90W-105W per card. The M17x-R3 is supposed to actually be thinner than the M17x-R2, so unless Dell created some revolutionary cooling system (along with other ODMs like Asus), this is not going to fly. Yes, I'm already aware TDP and power consumption aren't directly related, but they do correlate very strongly.
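The arithmetic spelled out, as a minimal Python sketch: the 65-75W figures are my kill-a-watt measurements from above, and the linear power-vs-transistor-count scaling is just the simplifying assumption I'm making:

```python
# Assume board power scales roughly in proportion to transistor count
# at comparable clocks on the same process node (a simplification).

MEASURED_5870M_POWER = (65.0, 75.0)  # W per card, FurMark + kill-a-watt
TRANSISTOR_INCREASE = 0.40           # ~40% more transistors for Blackcomb

for watts in MEASURED_5870M_POWER:
    scaled = watts * (1.0 + TRANSISTOR_INCREASE)
    print(f"{watts:.0f} W card -> ~{scaled:.0f} W")
# Prints ~91 W and ~105 W, i.e. the 90W-105W per-card range above.
```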
Edit: Here's what Techpowerup said about Barts Pro and XT-
-
Barts die size is 255 mm²... just bigger than the die of the GTX 460M/GF106 @ 240 mm² but a lot smaller than the GTX 470M/GF104 @ 366 mm².
The HD6850 is rated at a max board power of 127W, while the HD5770/Juniper is rated at 108W. Based on overclocking potential, the HD6870 has proven to be a Barts GPU pushed to its limits, while the HD6850 enjoys plenty of overhead to reach the same clock speeds.
The major contributor to power draw is clock speed, not shaders, so a desktop HD6870 at HD6850 speeds wouldn't draw much more power than the 127W and should be below the 151W max board power that the HD6870 draws at stock speeds.
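A quick sanity check on that, using the usual dynamic-power rule of thumb (power scales roughly with frequency times voltage squared); the small voltage drop at the lower clock is an assumed number, everything else is the desktop figures above:

```python
# Dynamic power approximation: P scales with f * V^2.

HD6870_POWER = 151.0   # W, max board power at stock
HD6870_CLOCK = 900.0   # MHz, stock core clock
HD6850_CLOCK = 775.0   # MHz, stock core clock
VOLT_SCALE = 0.95      # assumed modest voltage reduction at the lower clock

downclocked = HD6870_POWER * (HD6850_CLOCK / HD6870_CLOCK) * VOLT_SCALE ** 2
print(f"HD6870 at HD6850 clocks: ~{downclocked:.0f} W")
# ~117 W: close to the HD6850's 127 W and well under the stock 151 W.
```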
AMD should be just as capable of converting that power draw into a mobile TDP as Nvidia was converting the 150W GTX 460SE, with its larger die, into the GTX 470M. -
-
Even a 20% increase, with headroom to OC, will be enough for me.
What percentage of the 6850's performance do you expect the 6970M to reach at stock, 75 percent? -
In the end, a hypothetical 6970M @ 70W TDP based on the HD 6850 core gives me the following predictions:
A Vantage GPU score of around 8200-8300 points, with a Vantage P score of around 9000-9300 points with a processor like the i7-2720QM. At stock, the card will probably break 10k P points in Vantage, but only with the top-end Sandy Bridge CPU.
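For what it's worth, here's roughly how I arrived at those numbers. The desktop 6850's Vantage GPU score and the 2720QM's CPU score below are ballpark assumptions on my part, the clock ratio is the 80% figure mentioned earlier, and the weighting is what I recall from the Vantage whitepaper:

```python
# Scale the desktop HD6850's Vantage GPU score down to mobile clocks,
# then combine with a CPU score using Vantage's weighted harmonic mean:
# P = 1 / (0.75/GPU + 0.25/CPU), if I remember the whitepaper right.

DESKTOP_GPU_SCORE = 10300  # assumed desktop HD6850 Vantage GPU score
CLOCK_RATIO = 0.80         # mobile part at ~80% of desktop clocks
CPU_SCORE = 13000          # assumed Vantage CPU score for an i7-2720QM

gpu_score = DESKTOP_GPU_SCORE * CLOCK_RATIO
p_score = 1.0 / (0.75 / gpu_score + 0.25 / CPU_SCORE)
print(f"Estimated GPU score: ~{gpu_score:.0f}")  # ~8240
print(f"Estimated P score:   ~{p_score:.0f}")    # ~9070
```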
The predicted card also seems to perform just a bit better than the 480M.
If you want, and if I have time, I will try later to do some predictions for a hypothetical 580M @ 75W TDP.
-
Blacky, I appreciate the substance you bring in your posts, but I hope to God that you are wrong. Anything short of a 15-20% advantage over the 470M will disappoint me, though we are pushing towards the ceiling, where physics comes into play.
And check this out, from the X7200 thread:
One of AMD or Nvidia is coming back with a >75W GPU. Maybe the 580M is going to be a ~350-core GF104. Or is AMD unleashing more of the 6850's power?
Let the speculation begin. -
Hmm, a 95W AMD card based on the 6850 would get around 11000 points GPU score in Vantage. That basically puts it on par with the desktop 5770 or possibly even better.
But I really don't like making predictions of this sort. As you said, I would prefer to be wrong.
Still I will try to make some for the 580M, just out of curiosity. -
-
Anyway, I've tried to do some calculations based on the new GF110 architecture but everything I get is simply bad. In order to match the performance of the 6970M a hypothetical 580M needs about 5W more power/TDP. -
Mind if I go out on a limb and say that the 580M is simply a 336-shader GF104? They wouldn't dare attempt another GF110 or GF100.
The GTX 460 is 150W @ 675/1350/900. I don't see any way that ends up as a viable mobile chip, even at 100W. -
I know that we know better than to blindly trust Notebookcheck, but it's still worthwhile to note that they had this to say about the Radeon HD 6970M:
-
If true, AMD wins the top mobile GPU spot and we go back to the good days when the top GPU required the cooling only a 17" chassis or better could provide. :wink:
-
Of course, it will turn out to be just another Notebookcheck inaccuracy, and we'll see the same 70-75W AMD GPU in the 15" and 17" machines. -
Then we'll have to wait for Wimbledon.
-
Or maybe I'm wrong; after all, the problem might not be the 100W TDP but the power requirement. Maybe that was the problem with the GTX 480M. We all agree that a card with good performance per watt, like GF104/GF114, would be a hit at a 100W TDP. -
The only issue the GTX 480M had was Nvidia's ridiculous pricing.
If Nvidia had released the 100W 480M @ $560, a 2x markup from the reference card, the TDP wouldn't have mattered as much, and the W880CU would've been a hit.
It didn't even need "extreme" cooling. The W880CU was just a W870CU with a larger back panel, and the chip had a special heatsink.
AMD can afford to price a 100W 6970M @ $450 to $500. No one would care, because the performance per watt per dollar would be amazing. -
Still, you need another back panel, a special heatsink, probably some other improvements to the power circuitry, and a new PSU. Those are all costs that manufacturers have to pay just to include a 100W GPU. I guess this is the reason why no manufacturer besides Clevo supported the GTX 480M.
I also don't like the way the W880CU handles the GTX 480M. Improving GPU cooling by decreasing CPU cooling isn't the way to go and I think Clevo understood that as well.
And in the end, you can make more money with 75W cards since more manufacturers can use them, and that is where AMD can make money, leaving the extreme high-end sector to Nvidia (although the GTX 480M was anything but high-end). -
Ignoring that it was a borked GPU, the GTX 480M wasn't really an attractive offering because it was expensive and didn't offer enough performance over the Mobility HD5870 for the extra power it drew.
Clevo may have been the only company to pick up the GTX 480M, but Dell, HP, and Lenovo workstation notebooks each cool a 100W TDP GPU. Yes, you'll always see fewer notebooks that will cool a 100W GPU, but you'll see more willing to take on the task if the perf/watt and perf/price are worth it.
For those thinner notebooks that don't want to engineer cooling for a GPU with a 100W TDP... well, that's why AMD and Nvidia make lower-end cards.