Short answer: don't wait for 480Ms.
Metro 2033's bad coding makes the gap noticeable: Very High settings are just not playable on the M17x, whereas they should be a breeze on 480Ms. I just can't see it happening without a heatsink or fan revision, and without either accepting near-100C temps or low supply, since Dell would have to stock low-voltage or slower-clocked cards. The GPU heatsinks in the M17x R2, and the overall design, were just not meant for GDDR5 cards with 75W+ TDPs. Dell just barely managed to squeeze 5870s in there, and those are low-voltage samples that still run at 75-80 degrees normally. I can't imagine what another 1GB of GDDR5 on the other side of the card, and an overall more power-hungry, heat-generating design, would do.

There's room for improvement in the R2's GPU heatsink/fan design, but not enough, IMO. Plus, Sandy Bridge is looking to release a few months early, so maybe, if we do get 480Ms, they'll update a few more components to handle the higher thermal demands the 480M requires. In any case, AW has always been slow to release refreshes; look how long it took them to get a 5870 into the R2, and then how much longer after that to get Crossfire working. Even if AW managed to make heatsinks out of liquid diamond, I still wouldn't expect them to show up in an R2 for at least two months.
-
Well, if you guys follow the Clevo boards, they manage the 480M heat very well... by making a nice copper sarcophagus for the card and running an extra heatpipe to a second fan to help cool the monster. That way amazingly low temps are achieved.
I wish we could have the same level of cooling for our Radeons in AW machines. -
-
If they put a single GTX 480M in the M17x, I'm almost certain it will use both GPU fans to keep it cool.
It would make sense that way too, going with a single card and needing both fans to keep the temps at bay.
But at the end of the day, a single 480M is hardly anything to get excited over compared to 2x 5870s, other than the fact that a single card will have little to no issues, and many might be willing to sacrifice performance for stability... I know I would. -
You also won't have Crossfire driver issues.
-
I agree it would make more sense, but a 460M, even times two, won't sound as good as saying your machine has the 480M...
Meh, that's marketing for you. -
Thing is, with 460Ms this wouldn't be "the most powerful laptop in the universe".
-
So is the 460M really only 10% less powerful, then? If so, that's surely the perfect solution. I'd take them over my 5870s, as I prefer Nvidia.
-
That's like saying you'd take a 500-pound, acne-ridden brunette over a sexy slim blonde because you "prefer brunettes".
Seriously, wake up, man. Numbers count, not brands.
At the moment, while the 480M is the single most powerful mobile card, two 5870s outperform it, with higher speed, lower heat, and lower power draw.
Looking at the numbers, it just doesn't make sense to use 460M SLI in a laptop.
If you want that "nvidia speeeeeeed", then get a DIY ViDock from this very forum and plug a GTX 480 into your laptop, since you obviously don't care about any kind of mobility or battery life.
That would be the smarter alternative to 460M SLI.
-Ash -
-
PhysX and other nVidia goodies would be nice, but I really doubt they'd be released in a refresh if they don't improve performance much... -
But yeah, I'll get the 5870s for now; I'll just (in about... 5, 10 years) get myself a dream desktop with some GeForce beauties.
And some nice fans. Lots of fans.
-
I wonder what the temps of a single 480M would be in the M17x. The Sager NP9285 seems to be doing well with temperatures. On the other hand, my single 5870 gets up to 87-88 degrees Celsius while gaming. I've been a long-time Nvidia user. I've never had a computer with SLI or Crossfire, and with all the problems I've heard about, I'd prefer not to go down that path.
-
The 5870s I have don't perform that much better, they run hotter when overclocked, they flicker like buggery in most games, and the scaling isn't as consistent as Nvidia's.
Generally I've found SLI support to be better than Crossfire, and I also prefer the Nvidia Control Panel over the stupid layout in CCC (took me ages to find the scaling options).
This is why I personally prefer Nvidia GPUs... I've had a better experience with them so far.
If ATI sorts out these cards, then I'll happily keep them, but if not, I'll consider a swap (if it's ever an option).
EDIT: Also, if you look, the 460M is one of the coolest-running cards available, because it's based on a revised chip, not the original 480M tech. -
If you have flicker problems, you should perhaps try my suggested fix in the 5870 Crossfire problems thread.
-Ash -
Thank you, sir =) I'll probably know more about the bugs and fixes of the laptop I'll get before I actually get it, lol, but that's good. I actually spend more time tweaking and fixing bugs than I do playing the game, lol.
Still no info on the 480M. I'm almost certain they'll make a new rig for it, and it could be some time, I would imagine, no matter which way they go: concentrating on new cooling, or on lower power consumption. -
The right combination continues to be an Intel chipset + Nvidia GPUs in SLI.
The 4xxx/5xxx series are doing better than in the past, and they are not garbage, so no one should think that. But the Nvidia chipset in the R1 was only "ok"... barely. Hence most consider the R2 a sizeable improvement just because of the chipset changeover, not necessarily the GPU changeover. SLI works just fine on i7s; ask anyone who has a desktop chipset that runs SLI.
The 480M may be undergoing some tweaking, hence why we haven't seen much of them to date, or there could be design changeovers in the works for Sandy Bridge that may get coupled with the 480Ms. Don't know. Remember also that a 1GB 480M version should consume less power. And the TDP of the 5870 doesn't count the memory at all.
In the end they'll probably end up siblings: a checkbox on the machine to pick whichever you prefer. They're just taking a long time to show. -
But unfortunately you're still dealing with double the heat output and power consumption.
The HD58x0 series is already at the reasonable limit of heat and power draw for a notebook. Anything more just doesn't make sense in a notebook. Perhaps in an SFF gaming PC slightly larger than a Nintendo Wii I can see the 480Ms being useful, but...
The way I see it, the HD5xxx cards are perfect for notebooks (right now) due to their sheer efficiency, and Nvidia is learning that lesson with their mobile parts.
I personally think we'll see a desktop 460 derivative in a notebook part (remember, these parts are re-engineered and cut down), which will be a lot more efficient and useful.
The current desktop-480-derived part is just silly in a notebook, however.
If I were Nvidia? Well, the way I see it, they have one big problem that gets in the way of them being competitive in the notebook department: the "halo" stigma.
Nvidia tries their very best and sacrifices everything to be able to say "we have the fastest GPU!", but that means they can't conceivably and realistically produce a decent notebook GPU.
What they need to say is "we have the fastest desktop GPU and the fastest notebook GPU!" - they need TWO architectures.
From what I can recall, the G9x architecture was very decent and had great efficiency. What they need to do is take that general idea, DirectX-11-ify that part, shrink the process, and up the clockspeeds.
It's CUDA-compatible, so it can do all the crazy tessellation they're on about, too.
They really need two product lines, because you can't fit a monster truck into an undercover parking bay. Just doesn't work.
/ramble.
-Ash -
The 3800M is a 100W part and they've been pretty successful. I don't think dual 85W parts are all that far from reality. That might be 20W more than dual 5870s, but it's very doable and not all that "extreme", really. Would it be better if we had dual 470Ms that drew just 75W each? Probably it would, and I wouldn't disagree with that. But to think that a 5870 is the limit of what "should" be done in a notebook, when that part is barely faster than a 285M, is setting the bar too low.
Sadly, that bar is primarily driven by power. The real secret to increasing graphics ability has simply been the rise in power used by GPUs: for every increase in ability, there has come an increase in power. Now, I know, and believe, that a laptop operates under more stringent limits and really shouldn't exceed them as freely as they've been exceeded on the desktop. But small pushes past the limits are probably not a disaster, as long as they're not too large.
Want a sense of how much more efficient the 480M is than its reputation suggests? Clevo can drive a desktop CPU, 3 memory slots, and dual 480Ms from a 300W PSU, only 60W more than the M17x's. That is likely over the top in terms of increases, but it's not inconceivable that you could drive SLI and a notebook CPU on a 260-270W budget. Again, higher than now, but not hugely higher.
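To put rough numbers on that 260-270W guess (my own back-of-the-envelope assumptions, not anything Clevo or AW has published): Nvidia quotes the 480M at a 100W TDP, and a top mobile quad like the i7-920XM is a 55W part, so:

  2 x 100W (480M SLI)                                    = 200W
  + 55W    (i7-920XM-class mobile quad)                  = 255W
  + ~15W   (screen, drives, chipset, fans, PSU losses)   = ~270W

Which lands right in that 260-270W window. Treat the last line item as a guess; the first two are published TDPs.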
Much like the 280-to-285 transition and many similar moves before it, I'd suspect that eventually Nvidia will get things under control, power- and heat-wise. But I don't buy that the 480M is as over the top as it's made out to be.
And if they can get the power down a bit more, it will be not just a good chip but a very, very good one. -
I wouldn't even bother with the 480M. Come on, it's based on the desktop 465, and from the rumors, the 465 isn't even selling that well against the 460 now that the 460 is out; the performance of the desktop 465 and 460 is within 5% of each other. After all, it's based on the GF100 core instead of the GF104(?) core.
The laptop version of the 460 should run much cooler, cost less, and be less power-hungry. I wouldn't be surprised if it also ran faster.
So why bother with the 480M? -
What about ATI? Aren't they planning to release a Mobility 5890/5970 eventually?
-
The 480M is the only card with CUDA, PhysX, and DX11.
That's what I care about, and that's not what ATi can do.
And yes, it's already been released; you just have to upgrade it yourself. I don't think there'll be a mobile 5970, though, just a better 58xx, but I could be wrong. I saw it somewhere and didn't take much notice. -
-
I wanted to know whether ATI is releasing a new card soon; all you did was tell me what you don't care about and throw in some vague, unverifiable info. -
The ATI 68xx architecture will be a refreshed and improved 58xx arch.
ATI stated that their BIG new architecture would hit when they could churn out 28nm chips, and that producing it on anything larger wouldn't be profitable (or worth it).
So it is possible we'll see one, but the 6870M will probably be based off the desktop 5830... one can never predict the mobile front; it's a far more volatile market.
Sorry I can't help more, but sadly October is when it will be released, and then we can all say for certain. -
Let me ask the Nvidia fanbois something.
Put yourselves in the shoes of a game or app developer.
Now, you're a budding developer, and you want to put some GPU compute in your app. You want it to be profitable and accessible.
You have 3 compute languages to choose from:
1: CUDA
2: OpenCL
3: DX Compute
Number one ONLY works on Nvidia cards. Number two only really works on ATI cards (for now). Number three works on BOTH.
Holy crap! You would be a raving IDIOT to choose 1 or 2! Only half your users could use the feature if you chose either of them! Luckily you are smart, and you use DX Compute, because now EVERYONE can use it.
It also doesn't hurt to mention that CUDA is harder to program for than the PS3's Cell. Now that's saying something.
So, as a developer, you determine that BOTH CUDA and OpenCL are stupid. They aren't going to get much support, if any, from developers. People will use DX Compute instead.
Oh, did I also mention that CUDA is mainly targeted at Linux server clusters? Are you running a Linux server cluster? Didn't think so.
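To make the lock-in concrete, here's roughly what the simplest possible CUDA program looks like - a toy vector-add I made up purely for illustration, not code from any real project. The thing to notice is that this source only compiles with Nvidia's nvcc toolchain and only runs on Nvidia GPUs, so any ATI user in your audience gets nothing from it:

#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel: each GPU thread adds one pair of array elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float ha[1024], hb[1024], hc[1024];
    for (int i = 0; i < n; ++i) { ha[i] = float(i); hb[i] = 2.0f * float(i); }

    // Allocate device memory and copy the inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("hc[3] = %.1f (expected 9.0)\n", hc[3]);
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}

The DX Compute equivalent is an HLSL compute shader plus some D3D11 setup - more boilerplate, but it runs on any vendor's DX11 card, which is the whole point.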
Your next dilemma - physics!!!! You need to get your physics on! It will make your game look awesome!
So, again, you have three choices:
1: PhysX
2: OpenCL
3: DX Compute
Oh, we've been here before! We already know the right choice! What's the one where EVERYONE can see your hard work?
That's right! DX Compute!!!
Wow... that makes it look like ATI and Nvidia wasted a lot of money making their own selfish solutions!
So, in a developer's eyes (and I should mention, I am one), PhysX/OpenCL/CUDA are useless, and that's why ALMOST ALL DEVELOPERS WILL NEVER USE THEM.
If you're waiting for your special PhysX and CUDA stuff to magically get used soon... you're going to be waiting a long time.
Now, one last thing - DirectX 11? IamDuff, you said ATI can't do DX11... are you an idiot? (Probably a redundant question.)
You DO know that ATI was first to release DX11 graphics cards, right? For like... 8 months they had their cards out first? Right?
Jeez... fanbois are getting more and more... I don't even know what the word is anymore.
-Ash -
If the 480M were the same price as the 5870s and the M17x's cooling could handle it in SLi, I'd take it over the 5870s in a second. nVidia driver support >>> ATi.
-
Well, yes and no on the driver support thing.
Does everyone remember the nv4_disp.dll bluescreen?
Courtesy of Google -> nv4_disp bsod - Google Search
Historically, Nvidia's driver support has been just as bad, if not worse.
But if the 480Ms weren't so damned hot and hungry, I would take them too.
Only because they're a few FPS faster, and we need that in notebook GPUs.
-Ash -
Most people will agree that for the past few years, Nvidia drivers have been consistently more stable, have had fewer multi-GPU issues, and have been easier to use. -
When the "competition" was between the 280M and the 4870, the drivers were MUCH better for the 4870 regardless if it's SLI/XF or a single card. Now that the ATI XF drivers are limited for mobile 5870 I think it has people wanting new cards. I can say this much, I am 90% certain there won't be a M17x 480M SLI system AND there WILL be a single 480M M17xR2. I'm guessing without XF/SLI to worry about a single 5870 or 480M will both be great systems.
-
Actually, with the release of Catalyst 10.7, the Crossfire problems I had are pretty much gone. The two games that had horizontal flickering no longer have it, and RivaTuner isn't needed to make the driver work, unlike with Catalyst 10.6. So I'd say 5870 Crossfire is every bit as stable as the 4870s at this point. Still, the fact that Crossfire has been around for so many years and ATi still hasn't ironed out their AFR gives me pause about their hardware and software design, especially since SLi doesn't have that issue, AFAIK. Also, based on conversations I've had with BatBoy, the problem with the R1 wasn't the 280M but the nVidia chipset.
-
Arguing over the issues of current hardware on the red or green side is rather pointless, IMO.
Right now, I use ATI, because they're more efficient. Additionally, I haven't experienced any of the problems people have reported, save for flicker on my desktop system, which was fixed by turning on vsync.
The reason I bought Crossfire systems is to run vsync at 60fps, so that leaves me happy.
The last system I had before my current Crossfire 58xx desktop and laptop had an HD4870X2. That card came out of the box faulty; one GPU was dead.
This, however, was better than the issue I had before that.
The machine I had before THAT used a 9800GX2. I would play Age of Conan on it, before the patch fest started, and whenever a necromancer showed up on screen my PC would hard-reboot. It turned out to be Nvidia's (non-standard) interpretation of the shader code in one of the particle effects causing a really bad crash.
Needless to say, aside from the RMA on the first HD4870X2, that machine ran beautifully after I put in the replacement, necromancers and all.
I don't believe in sides or fanbois; over the years both companies have had major stuff-ups. Real bad stuff.
The Crossfire issues some people experience are minor, and there is a workaround available (vsync and triple buffering).
For those of us who had the nv4_disp error and the shader crash in AoC and other games, there was no workaround or even an acknowledgement, and the problems still exist to this day in certain configurations, without a fix.
Nvidia decided to just make new cards instead. $_$
I may sound a little biased against Nvidia, but I'm not, really. The GTX 2x0 cards were excellent, and I almost bought one, but then the HD5xxx series came out, so I skipped it.
The Fermi series just doesn't sit right with me, though. Too hot, too inefficient. Not smart enough. To me it's like stacking a 10GHz P4 against a 3GHz Core i7.
The 10GHz P4 might win at most things, but... really? Would you stick that in your machine?
Anyway, these are all just opinions, but I do like to voice them.
People can buy Fermis if they like; I just don't think it's smart to do so.
-Ash -
The same chipset was used in the R1 regardless of GPU.
-
I'm still talking about driver support, not overall system latency issues. I tested both myself. Changing Nvidia drivers based on the game played or benchmark run was a PITA. When the 4870s were released, we didn't have to do that. That doesn't mean it hasn't improved since, but last fall this was an issue.
-
Which games required you to change drivers, and was it needed to make them work, or did you just do it for the sake of benchmarking? Because if it's the latter, then I don't really consider that a valid argument against the drivers, nor would most people. But if there were legitimate problems running some games, then your point stands. I've had experience with a desktop SLi 260 setup, and the biggest flaw I remember was microstuttering. Did you ever have horizontal flickering with the R1 + 280M SLI? Or need to run RivaTuner hacks to make reference drivers work? Crossfire has been a major PITA up until this 10.7 release. Hopefully from here on out things go a bit smoother.
P.S. Strictly speaking from a hardware-quality standpoint, did you ever burn up multiple 280Ms? I've noticed you've already had two, or is it three, dead 5870s now? -
I was playing a lot of Fallout 3 and Borderlands last fall. I didn't see any flickering until the R2 was released. There were some obvious issues with Fallout 3 (stuttering/freezing) that technically weren't resolved until I installed the 4870s, although it didn't happen with the 8800GTX SLI in my M1730. That said, there was a pre-Cat-9.12 driver that had issues with Borderlands, but Cat 9.12 was great for its time. I guess it's a bit all over the place depending on the system and driver used, but there was a noticeable difference changing from the 280s to the 4870s. No workaround needed.
-
Well, I'm still hoping to see an R3 with a 480M GPU.
Toss in Sandy Bridge and USB 3.0,
and we have the perfect laptop. -
Taken as a whole, over many years, Nvidia drivers are night-and-day more stable than ATI drivers. ATI has come a long way in the past year, but you can still name at least a dozen issues that arise from their driver, not one or two. That said, I may like the Nvidia drivers, but I dearly hope ATI keeps the improvements coming; competition gives us consumers choices and keeps the market alive. But to think that ATI drivers are on the same level of reliability as Nvidia's (which have their own problems) is simply sad. Up until recently, ATI hadn't even implemented 2D acceleration that had existed for years. Anyway, keep going, ATI, but ATI isn't "there yet", IMO. And until their drivers have displayed stability on new games for months or years, I'm not going to wave their flag yet. Two-week-old drivers have hardly been fully vetted by anyone.
The R1's main weakness was the NV chipset. Every enthusiast I've ever known who's tried NV chipsets groans just remembering the experience. They are bad. Thus anything mounted on them never really got much of a chance to show its stuff. Nvidia chipsets are not good at all.
However, it may take another generation, a 480M v2, before we get to see heat, power, and performance where they should be. I don't think Nvidia has any room to think they're on top. While I might have issues with ATI drivers, the performance will be great no matter which vendor's chip gets used. Nvidia has some work to do, and they are late, very, very late. Moreover, much like on the desktop, it's looking like ATI is going to have 6-12 months of free roaming to sell the Mobility 5xxx series with little to no competition.
If you overclock and have stability issues (EITHER VENDOR), you get what you get; no sympathy. I like benchmarking very much, but moaning about drivers being bad because of your OC is kind of lame. -
And yeah, if the M17x, USB 3.0, and 480M SLI came together, it would probably be worth a buy. We'll see if that actually happens, however.
-
And there's not much difference between 480M SLI and 5870 Crossfire...
unless the R3 comes with an Nvidia Quadro FX, which costs $1000 per graphics card, the same as the Radeon 5970...
But I doubt it...
So let's stick with Crossfire 5870s... -
That, and I know about DX. I like physics; I've never heard of or seen anything using OpenCL, and some of my favorite games required PhysX to install even though I can't use it (my card is far too old).
And CUDA, because raytracers are built on it, makes 3D modeling programs MUCH faster to render. Specifically the ones that use raytracing, but the way the architecture works, I'd imagine it helps all around.
But anyway, I'm getting myself a Sager/Clevo for the 480M. It's a pretty penny, but I'm not picky; I'm used to old computers, so I'll still be blown away, I'm sure, and it'll last me a good number of years, and it even costs less than the Alienware M17x with the options I wanted, even with a custom paint job.
=)
Which means my time in this thread is done.
(The Clevo model (W880CU) had USB 3.0; someone mentioned hoping the R3s would have that.) -
PhysX fails to impress me at all; I couldn't care less about it.
CUDA is "ok", neither a pro nor a con, but DirectCompute will again be more useful in the end, so I don't feel CUDA is needed.
So while I may prefer the drivers, I don't think all those added marketing bullet points are worth anything on the 480Ms. And I certainly have zero wish for 3D, neither in a TV nor in a laptop.