I think we all were under the assumption it was on hold for the time being....
NP Fox, my pleasure. I too am looking forward to more and more liaison with Dell over future topics. The more discussion, the better!
As above. Bill, as soon as it is available - drop a note on the linked thread and we will get going on it.
-
steviejones133 Notebook Nobel Laureate
-
Mr Fox will post the major discussions from the conference call, but I wanted to post a few quick small things Luis cleared up:
ALL M18x's are made in China. So are the M14x, M15x, and M17x.
The numbers on the motherboard near the battery compartment are date stamps, not motherboard revisions. All M18x units have the same motherboard.
Custom fan profiles are something they are considering, and something they can do. No ETA, or even a guarantee.
Customizations for Dell laptops are rolled into Nvidia drivers.
Customizations for Dell laptops are NOT rolled into AMD/Intel drivers. They are working with them to make this happen in the future. This is why Dell's driver for the 6990M has brightness control on the function keys and new drivers from AMD/ATI don't.
Mr Fox will have a much more in-depth follow-up post. This is just a teaser to answer speculation on where the laptops are made and on motherboard revisions. I remember pages of posts on those two questions. -
steviejones133 Notebook Nobel Laureate
I thought one of the interesting answers was regarding the lack of "on the fly" gfx switching on the M18x - the explanation was informative and something I hadn't considered to be a reason why we don't have it. I particularly liked Louis' answer to why this wasn't available on a single-GPU setup after explaining why dual-GPU setups don't have this feature (I think it was Johnksss who asked that question, but my line wasn't that clear)......but then again, you can't configure an M18x with a single GPU, can you? - moot point, perhaps?
-
steviejones133 Notebook Nobel Laureate
-
FYI, Dell is sending me another M18x with the same spec instead of 580M SLI.
I was told the Nvidia card is on back order, and we are just going to cross our fingers and hope it doesn't happen with the new system.
I'm quite happy with the spec, so I'll be happy as long as it doesn't overheat like the one I have now.
Good luck to you all as well. -
Louis says both AMD and Nvidia were not forthcoming on how their cards handle the resource allocation buffer and frame buffer allocations.
They simply don't want to provide a white paper on how things work. This is clearly some sort of capital and investment issue these companies are mulling over; they don't want the hassle of providing support for graphics switching on such platforms.
Can it be done? Of course it can. Provide the white paper and the detailed layout of how their system works, and we VLSI engineers will come up with a solution for the problems. But that means time and investment from AMD and Nvidia. They are simply saying a straightforward "NO", with no technical or precise info on why they won't. Dell can't do much if these companies don't play ball.
It's all down to AMD and Nvidia coming forward and being open about how their systems work, since dedicated logic is needed for this purpose. Nvidia has a "Copy express engine" for their single-card solutions, and they even talk about multi-GPU scaling in their own white papers, but funnily enough they won't actually tell any vendor like Dell how it's done.
There isn't a technical problem stopping anyone from making a solution; it's all bureaucracy from AMD and Nvidia.
Once a solution is done in layouts and simulations, VLSI engineers can send the design specs and sim models to any contractor in Taiwan to fabricate the chips. In bulk, the prices are going to be on the order of 5-15 USD per chip. That is not a cost increase for a 2-grand system. -
steviejones133 Notebook Nobel Laureate
With that explanation, it does seem rather remiss of Dell not to bother with it....the assumption I took from it was that, as it was a heavy system aimed at being a true DTR, Dell didn't bother with anything to enable on-the-fly switching - basically assuming that no one would want to carry the lump around and use it!
Thanks for explaining the ins and outs, as I am quite the layman in reality. -
Dell is completely innocent on this; in fact, their hands are tied. -
My congratulations to Mr. Fox and each one of the other organizers and participants (from both sides). I feel that you all represented us in the best possible way.
I feel this is a promising beginning of a periodical agenda of meetings on the many topics we are all passionate about.
Thank you all, pals!!! -
Nvidia does not like Intel and vice versa, for more than just PR reasons..
And a big thanks to Mr Fox! Well done on the front end of things.
The back end would be the report you're posting in the next few days... -
If all the motherboards are the same revision, then why do earlier boards work and not later ones? Or was there just a big batch of defective boards made?
-
Not even sure the boards are defective, since 580M GPUs work in them.
It has to do with the CTF circuit on the 6990M and some more technical stuff. -
1. Something is causing the 6990M GPU circuit not to report the correct temp to the mobo.
2. When no temperature signal is received, the mobo should max the fans, but it's not.
Luis gave us a much better explanation, but I'm no engineer. "Basically" there's an interrupt that passes GPU temperature info to the mobo. There is a signal loss or interruption that's causing a problem. When this signal is interrupted, the fans are supposed to go to max speed, but they are going to zero speed instead. Some cards are not causing the problem, some are. They only have one test system that has the problem, but once we send them the captures they'll have more to test. That's it in a nutshell; please wait for Mr Fox's explanation vetted through Luis. -
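For anyone who thinks better in code, the intended fail-safe described above - max fans when the temperature signal is lost - can be sketched roughly like this. Every name, threshold, and RPM value here is a made-up illustration; the real logic lives in the embedded controller firmware, not in anything user-accessible:

```python
# Rough sketch of the fan fail-safe described in the post above.
# All names and numbers are hypothetical, purely for illustration.

MAX_FAN_RPM = 4500  # hypothetical hardware maximum


def target_fan_rpm(gpu_temp_c):
    """Map a reported GPU temperature to a target fan speed.

    gpu_temp_c is None when the temperature interrupt is lost.
    """
    if gpu_temp_c is None:
        # Signal lost: the correct behavior is to fail SAFE and max
        # the fans. The affected boards reportedly drop to 0 RPM here
        # instead, which is what leads to the thermal shutdowns.
        return MAX_FAN_RPM
    if gpu_temp_c < 50:
        return 1500
    if gpu_temp_c < 75:
        return 3000
    return MAX_FAN_RPM
```

The whole bug report boils down to that one branch: instead of treating a lost signal as "assume the worst", the fan controller treats it as "assume idle".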
Now that explains why at times my fans go full blast for no apparent reason with zero load and low idle temps. Interesting... My machine is from the very first batch, btw.
-
Yea Aiki, I have seen that as well ... but it's not a problem, since increased fan speed is no worry.
I guess this means we have to put an end to all the AMD bashing that's been going on (as if, as Aiki has pointed out, NV hasn't had their share of failures). There was a developing thread here of ... 'AMD screwed us, never buying AMD again, see ... NV is worth the extra cash' ... or something as such.
Good to have some clarification ... -
I work out of town, Northern Alberta, and just caught up with the thread. Glad you had the opportunity to speak with Dell regarding the issue(s). I am looking forward to reading the synopsis. Read the tease...interesting about the MB "revisions". Perhaps that, in effect, is what they are. It seems that although we end up at the same place (overheating); we get there in different ways. Even my 2 machines (original and replacement) exhibit different symptoms.
My compliments to Fox and everyone who contributed to this. It has been an education for me, just following the thread. -
nvidia is only 150 bucks more now.
-
In that case I should just push for a new system at this point with 580Ms. No point in holding on to potentially defective hardware. -
580's are on back order, and now the new setup is only 150 bucks more.
So the drama about price/performance has hereby been "smacked in the face"... -
With this and AMD's drivers, one would literally have to be an idiot to go AMD
-
That might be for those who had terrible issues with it. I have yet to have a bad experience with either product. I will choose AMD at this time because the overall power draw is lower and I care about my monthly bills.
-
they both draw 100 watts maxed out. -
My specs don't draw more than 225 watts on most games un-overclocked, and on a single card using F@H it's only 163 watts. Most games hover around the 125-155 watt region on a single card.
Overclocked, I touch 256 watts; can you say the same for the Nvidia?
Even on FurMark the draw is less than 289 watts un-overclocked. I have already seen many reporting over 300 watts easily on FurMark for the 580Ms, unless those people were reading it wrong. -
The 580Ms draw more power at max load than the 6990s, about 10-15W each. That's what we have seen in-lab. All stock clocks, we never overclocked.
-
^^ Exactly. And since there are 160-SP clusters inactive on each of my 6970Ms - which are essentially the same chips, binned as 6970M instead of full 6990Ms - the power draw is lower at max loads compared to the 6990Ms, and lower still compared to the 580Ms.
There should be no debate over this one, as it's old news. -
Xen has some stock numbers as well for the 580's. I'll have to re-run them when they get around to sending my replacement.
So, not sure what page you guys are on. -
At stock, the 2920XM's power draw could differ by a max of 5-10 watts, which is also doubtful considering even most apps show my 2720QM pulling 55 watts when it's supposed to be 45 watts. Unless it's measured directly on the motherboard, those programs are just approximations.
I get 289 watts max on FurMark, and without that I get much lower. That is a considerable reduction compared to the 580Ms.
And I am more than happy with these GPUs; my next target is the HD 7000s. -
Yeah, I get far more than 289....but at the end of the day...you are right...the 580 SLI/2960XM, all overclocked, pull up to 380W...the AMD was about 15 to 20 watts lower. (while running under dice)
side note:
if you're worried about 20 watts....lol...better run the iGPU...haha -
skygunner27 A Genuine Child of Zion
You guys are supposed to be in the "Big Leagues"....who cares about power draw or monthly E-bills?
My IGP draws less power than both of your 580M SLI & 6990M CF!! lol. Don't even make me change my Intel HD 3000 from performance to balanced!!
Don't even...................... -
So I care about the 4-5 dollars I save every month; that's ~60 USD wasted per year when I am just fine with this setup at the moment. This is in El Paso; pretty soon I will be in Singapore, where electricity bills and room rents are the biggest killers in terms of cost of living. And the moment the HD 7000 launches I am getting rid of my HD 6970Ms, because moles in the know are reporting the TDPs for the HD 7000 mobile series as looking very, very good. It will be nowhere near 75 watts stock if all goes well at TSMC.
As long as I get smooth frame rates on the vast majority of titles out there and it's cheap, I go with that one, regardless of whether it's AMD or Nvidia. At this time that is AMD.
It's useless to pay 150 USD more for Nvidia and then pay 60 USD more (in El Paso; in Singapore even more) over the year, all for nothing. It's like I'm paying 150 more, or whatever amount, just so I can pay more in electricity bills. Besides, all 3 top options - 6970M/6990M/580M - are very playable on most titles, so why the need for the absolute fastest at the expense of power bills? Not for me. -
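The electricity math above is easy to sanity-check. A quick back-of-the-envelope sketch, where the wattage gap, hours per day, and rate are all assumed illustrative numbers, not measurements:

```python
# Back-of-the-envelope yearly cost of a constant extra power draw.
# All inputs below are illustrative assumptions, not measured values.

def yearly_cost_usd(extra_watts, hours_per_day, usd_per_kwh):
    """Cost per year of drawing `extra_watts` for `hours_per_day` every day."""
    kwh_per_year = extra_watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * usd_per_kwh

# e.g. ~20 W extra GPU draw, 6 h/day of gaming, $0.12/kWh:
print(round(yearly_cost_usd(20, 6, 0.12), 2))
```

At assumptions like these, a 20 W gap alone is only a few dollars a year; getting to ~60 USD/year implies longer hours, higher rates (plausible in Singapore), or comparing whole-system draw rather than just the GPU delta.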
Yeah, I'm just giving him a hard time. He's a good guy, though, and knows quite a lot.
side note:
dude.. I run:
a 42" TV
a phase change unit: 400W
a portable AC: 1200W
a water chiller: 680W
two GTX 580s and an overclocked 980X: 1680W
so 20 watts is of no concern for me.
-
No issues with thermal shutdown.
MB Rev. F2 1136
M18x
i7-2960XM
16GB 1600
2x 6990M
2x 500GB Momentus XT RAID 0 -
The max I tried was taking my Athlon Thoroughbreds back in the day from 1.4GHz to 3.15GHz on custom R404-A; it was an expensive affair that I am not interested in anymore. -
-
I have been waiting to see if Dell found a solution to the overheating issue, and it looks like it may take some time, so I think I will call them, but I'm not sure what to expect. What is the best way to proceed?
Thanks. -
AW lead graphics engineer Mr. Louis confirmed they have put 6990M replacements on hold as they need to get to the root cause. Probably no point in sending out more of the same when they haven't tracked down the exact cause just yet. -
-
Stick with HWiNFO64 to run them full blast; just make sure the settings are not more than 3800 RPM for the GPUs, otherwise they seem to cut off at random.
-
Very good, so you don't have to worry about a re-paste for a long while to come. -
I don't seem to be able to get HWiNFO64 to control my fans :-/
-
HWiNFO64 can indeed control fans, on all M18x's. You can even have a custom RPM setting based on CPU temp. Are you guys using it right?
Matter of fact, it's one of the 2 known workarounds for the thermal shutdown problem dual 6990Ms are causing. -
Can you let me know what option I have to check to allow me access, or where in the program I can find it? Thank you
-
OK.
First, go into Config. Uncheck both "GPU I2C Support" and "GPU I2C via NVAPI". That will make it load sensors much faster. Press OK.
Then go to Sensors:
At the bottom of the pop-up window, press the little button to the left of "Logging Start". It looks like a fan.
Use "custom auto".
Leave the lower settings alone; change the maximums to only 3900 RPM. -
Thank you very much! Very helpful. Any reason to keep it at only 3900 RPM?
*Official Alienware M18x AMD CrossFire Discussion Thread (re: Dell/AW Conference Call)*
Discussion in 'Alienware 18 and M18x' started by Mr. Fox, Oct 29, 2011.