Statement 1: There is nothing wrong with the 7970M MXM itself. I have MANY of them and they function exactly as I expect. 2.2 TFLOPS, 115 GB/sec DRAM and 12 GB/sec PCIE bandwidth every time I run it. Exactly as advertised. Faster than any other MXM I've ever tested.
Statement 2: The MAIN problem is how the laptop is constructed.
Statement 3: The secondary issue is AMD's driver.
(2 & 3 may be swapped depending on your perspective, but from my POV, IGP/MXM switching without an integrated mux like the M17x-R4 uses is fundamentally broken).
Proof of 1: Take a 7970M MXM, put it on an MXM carrier, install it in an Ivy Bridge motherboard, install the AMD 12.9 drivers and modify the driver so it thinks the MXM is really a Radeon 7870 PCIE card. It functions exactly like a 7870, just with a lower set of clocks. I've done it. It works.
Statement 2 explanation: In the P150EM (and other laptops that swap IGP/MXM without a mux) the pins of the INTEL CPU are driving the display connectors. The video outputs of the MXM are not connected to anything. They cannot drive any monitor. The MXM renders a frame into its local DRAM (GDDR5 for pretty much everything now). This is full speed. But the MXM isn't driving the monitor, the INTEL CPU pins are. That frame has to get from where it is (MXM GDDR5) to somewhere else before it can be displayed. So the MXM's GPU has to DMA it over PCIE to x86 memory, and the INTEL CPU has to pick it up and shove it to the monitor.
While the MXM can chug away on the next frame (double buffered), eventually it'll finish with that one too and basically go idle waiting for the flawed display path to drain. This is (almost certainly) why the 7970M shows poor utilization. It's stalled by the display path.
To me this architecture is FUNDAMENTALLY FLAWED. Instead of using wires (minuscule power, freaking fast, virtually no delay), it uses significant PCIE and host memory bandwidth (and additional power) just to display pixels. Let's do a little math: 1920x1080x4BPP*120Hz = 1 GByte/sec of PCIE bandwidth and AT LEAST 2 GB/sec x86 memory bandwidth (GPU write / CPU read for display) 100% of the time you're playing a game. The 2x is if the driver is implemented optimally (GPU writes directly into the x86's display buffer). If not, it's 4x (GPU write over PCIE, x86 copy to display buffer, x86 read for display), or 4 GBytes/sec x86 DRAM bandwidth just to display an HD 120 Hz video. 100% useless overhead. Fundamentally flawed.
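To make that arithmetic easy to check, here is a minimal sketch of the same calculation. The inputs (1080p, 4 bytes per pixel, 120 Hz) and the 2x / 4x copy counts are the assumptions stated above, not measurements:
[CODE]
# Back-of-the-envelope check of the display-path overhead described above.
WIDTH, HEIGHT = 1920, 1080
BYTES_PER_PIXEL = 4          # 4 BPP (32-bit pixels)
REFRESH_HZ = 120

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL   # one finished frame
pcie_traffic = frame_bytes * REFRESH_HZ          # GPU -> host over PCIe, per second

# Best case: the GPU DMAs straight into the display buffer in host memory,
# so host DRAM sees one write (from the GPU) and one read (for scan-out).
dram_best = 2 * pcie_traffic

# Worst case: GPU write, CPU copy into the display buffer (read + write),
# then a read for scan-out -> 4x the frame traffic.
dram_worst = 4 * pcie_traffic

GB = 1e9
print(f"PCIe traffic:          {pcie_traffic / GB:.2f} GB/s")   # ~1.0 GB/s
print(f"Host DRAM, best case:  {dram_best / GB:.2f} GB/s")      # ~2.0 GB/s
print(f"Host DRAM, worst case: {dram_worst / GB:.2f} GB/s")     # ~4.0 GB/s
[/CODE]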
So ask yourself: who designed this path into their reference designs? Who pushed it? Who stands to gain most from it? Who does it hurt the most?
The Alienware engineers foresaw this. That's why they put in the video mux. Basically it's an A/B switch for the video output. Using the Intel CPU for display? Flip the switch to A, so the iGPU pins are connected to the monitor, put the MXM to sleep, get low iGPU performance and low power. Doing high-performance GPU work (GPGPU like I do, or playing games)? Flip the switch to B, which connects the MXM video out pins to the monitor, put the iGPU to sleep, get high MXM performance and high watts. Best of both worlds. More foresight, engineering effort, and $ per laptop ($10?) required.
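Purely as an illustration of that A/B behavior (a toy model, not anything Alienware actually ships), the muxed policy boils down to something like this:
[CODE]
from enum import Enum

class DisplaySource(Enum):
    IGP = "A"   # Intel iGPU pins wired to the panel
    MXM = "B"   # MXM display outputs wired to the panel

def muxed_policy(high_performance_workload: bool) -> DisplaySource:
    """Toy model of a muxed design: the GPU that drives the panel stays on,
    the other one sleeps, and no frames cross the PCIe bus just for display."""
    if high_performance_workload:
        return DisplaySource.MXM   # iGPU sleeps, MXM drives the panel directly
    return DisplaySource.IGP       # MXM sleeps, iGPU drives the panel
[/CODE]
In the muxless P150EM there is no B position: the iGPU always owns the panel, so every finished frame has to take the PCIe round trip described above.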
I think there's a better way to do this (use AMD's persistent buffers so the pixels never hit x86 memory), but my guess is that's not how they're doing it.
Statement 3 explanation: NVIDIA is doing it better than AMD. I have no visibility into why. Conjecture: could be a few things. The AMD driver might not be writing directly into the x86 display buffer, so it has to do the extra copy using the x86. Perhaps there's a signaling / VSync beat frequency / latency issue. Either way, I don't see any reason why the AMD solution should take a larger hit than the NVIDIA solution from a hardware perspective. The question is why? Incompetence? Lack of priority? Intel likes NVIDIA better and shares with / helps NVIDIA more because they really hate that AMD exists? Who knows?
Conclusions:
1) The M17x-R4 hardware does it right, but the drivers for that seem to be less than optimal (manual works, automatic doesn't always)
2) Clevo (and others) could have done it the same way but chose not to
3) NVIDIA does iGPU/MXM switching (which isn't really switching) better than AMD
4) New AMD drivers can make it much better
5) There's nothing wrong with the 7970M MXM
-
2. Correct
3. Well it is better in the way that it actually works.
4. Truly hope so.
5. There is, if new drivers cannot solve the issues (AKA the problem is hardware based). -
From http://www.nvidia.com/object/LO_optimus_whitepapers.html
Optimus Copy Engine
In order to utilize the software innovations of Optimus, the GPU requires a creative new hardware feature, known as the Optimus Copy Engine.
Optimus avoids usage of a hardware multiplexer and prevents glitches associated with changing the display driver from IGP to GPU by transferring the display surface from the GPU frame buffer over the PCI Express bus to the main memory-based framebuffer used by the IGP. This occurs when an Optimus Profile indicates the application requires the GPU to be enabled. The key to performing the display transfer without negatively impacting 3D performance is the Optimus Copy Engine.
The Optimus Copy Engine is a new alternative to traditional DMA (Direct Memory Access) transfers between the GPU framebuffer memory and main memory used by the IGP.
Traditionally, mem2mem DMA transfers are performed by the 3D engine. To preserve coherency, the 3D engine is blocked from rendering until the mem2mem transfer completes. This time-consuming (synchronous) DMA operation can stall the 3D engine and have a negative impact upon performance. The new Optimus Copy Engine relies on the bidirectional bandwidth of the PCI Express bus to allow simultaneous 3D rendering and copying of display data from the GPU framebuffer to the main memory area used as the IGP framebuffer. (The IGP then reads the display data out of the framebuffer and sends it through a display interface (DVI, HDMI, etc) to the display). This asynchronous DMA operation provides data coherency and allows for a dramatic increase in both performance and efficiency. -
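As an aside, here is a toy sketch of the behavior that whitepaper describes: a dedicated copy engine lets rendering and the framebuffer transfer overlap, instead of the 3D engine stalling on a synchronous copy. The 8 ms render / 2 ms copy timings are made-up assumptions, and this is plain Python, not anything resembling NVIDIA's hardware:
[CODE]
import threading
import time

FRAME_RENDER_S = 0.008   # assumed: 8 ms for the 3D engine to render a frame
FRAME_COPY_S   = 0.002   # assumed: 2 ms to move the finished frame to host memory

def render_frame():
    time.sleep(FRAME_RENDER_S)      # stand-in for the 3D engine working

def copy_frame_to_host():
    time.sleep(FRAME_COPY_S)        # stand-in for the PCIe mem2mem transfer

def synchronous_path(frames: int) -> float:
    """3D engine performs the copy itself, so it stalls for every transfer."""
    start = time.perf_counter()
    for _ in range(frames):
        render_frame()
        copy_frame_to_host()        # rendering is blocked here
    return time.perf_counter() - start

def copy_engine_path(frames: int) -> float:
    """A separate copy engine ships frame N while frame N+1 is rendering."""
    start = time.perf_counter()
    pending = None
    for _ in range(frames):
        render_frame()
        if pending:
            pending.join()          # previous copy finished in the background
        pending = threading.Thread(target=copy_frame_to_host)
        pending.start()
    if pending:
        pending.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    n = 100
    print(f"synchronous copy: {synchronous_path(n):.2f} s")   # ~1.0 s
    print(f"copy engine:      {copy_engine_path(n):.2f} s")   # ~0.8 s
[/CODE]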
-
Given that Enduro is part of the 7970m and is advertised as a feature of the card, something is in fact wrong with the 7970m.
Don't kid yourself. -
If the assumption were true that it is in the actual hardware between GPU/iGPU and display, then the NVIDIA 680m would suffer from the same performance problems. However, as plainly seen by 680m users who have posted their results, that is definitely not the case on muxless laptops supporting iGPU.
Given we know the 680m is not a problem on these laptops, the problem must exist either in the AMD drivers themselves OR somewhere where the AMD driver meets the hardware, causing the iGPU to kick in when it is not needed. -
-
Kind of the opposite for the first couple years of their implementation of GPU-to-GPU DMA. Marketing BS until the technology caught up. -
-
If the limiting factor is the muxless design (which is what the OP basically said), how come the card performs perfectly in certain games/applications and not at all in others? Your point does not make sense with the problem we are seeing. The fact that each frame has to be copied probably does hinder performance, but it is not what causes the issue at hand. It is probably more related to the fact that this memory transfer is not handled properly by certain game engines (which aren't really made to do such memory transfers optimally to begin with). AMD should have seen this and found a solution before selling this card and suggesting the muxless design to resellers.
Edit: Also, why does utilization drop in certain areas of the same game? In games like Crysis or WoW, it seems that when you are outdoors (more objects to be rendered), utilization drops significantly while the CPU is still hovering around 60% (not a bottleneck) -
-
What I think you are right about is that this is a driver issue, not a hardware one; that has been amply proven (to my understanding).
That AW had the wisdom to use a mux to avoid the incoming issue, there is absolutely no question about. But you can't blame Clevo for not doing it, although we would have loved it if they had.
It's Enduro and Enduro alone to me, and that should be fixed in the drivers. -
-
You keep saying that, but I have yet to see any proof of this. If anything, the fact that multiple driver releases have come and gone and AMD has not fixed the issue in the 3+ months they have known about it makes me doubtful that they can properly fix it with drivers at all. -
-
They have absolutely no proof that this is a driver issue at all. The fact that Alienware works with Enduro turned off doesn't prove it is a driver issue, because it is fixed by adding an additional piece of hardware (the mux), which only proves that the card will work when Enduro is completely bypassed by a mux. If the only way to fix the 7970m is with the use of a mux, well, I'm glad I jumped ship then, and I feel very badly for everyone else who placed their faith in AMD.
-
I copy here again my reasoning for saying it's not a hardware issue; it's just adding up and analyzing all the info and data we currently have. I might be wrong of course, but I keep thinking it's a driver issue.
And about AMD having done nothing in 3 months, I address this notion too; I quote here the most relevant part of my spoiler:
Only one thing is clear for now, and that is that ENDURO technology is responsible for underutilizing the dGPU and consequently rendering low FPS for some games and/or game maps and/or game configurations.
Now, as a scientist, I first analyze the empirical data, then try to formulate a hypothesis, and then test it to prove or discard it. Having avidly read many users' info and comments, my current understanding of the problem is this:
I DO NOT think this is a hardware issue. Why? Well, the proof is everywhere: not only does the 7970M perform flawlessly and very well in the Alienware with Enduro shut off, but also in Clevo machines it performs perfectly in some games, and even in most affected games it performs normally in certain maps, multiplayer or non-multiplayer settings, etc., etc. The most common complaint is that FPS goes up and down along with GPU utilization.
My understanding is that a piece of hardware is inherently DUMB as a rock, and that its performance is only as dumb as its operator (in this case Enduro) and the instructions it receives to operate.
Another piece of evidence on this subject is that different Catalyst driver versions do affect the performance, maybe not in the way we want, but it doesn't just "stay the same"; worse, better or weird would be proper terms.
That said, and taking the comments of the most knowledgeable people around, the issue IS driver related. The fact that Nvidia had very similar issues some years ago when they launched Optimus just adds to this notion.
Another fact that can be interpreted in many ways is AMD's and most resellers' silence about all this:
We can go all paranoid and think there is a huge conspiracy to shut us down and make this problem go away.
I can even picture some black-clothed NVIDIA agent pouring a mysterious liquid on the hardware or hacking AMD's drivers with a hidden flaw to cause all this.... the options are infinite...
Now, taking into account Anandtech's article, Sager's few deleted comments, Mythlogic's comments, and even the short and very scarce AMD posts (Mark AMD), which can be regarded as the most "official" position on the matter, the conclusion for me is this:
1) AMD has been slow to detect, understand, address and, more so, recognize this issue or its mere existence. This is not necessarily bad, only unfortunate (for them and us customers).
2) AMD took the usual attitude towards these kinds of issues and remained silent, doing nothing at first, hoping it was a mere configuration or driver installation issue and that other users would help the affected ones.
3) After seeing the issue was real, not fixable by users and configuration combos, not even by reseller drivers, and that even some resellers were complaining to them and in open forums, AMD decided to put some people to work and asked their associates (i.e. Sager and others) to back up their silence policy for the time being, releasing a few short phrases to gain time. "We are working on it."
You can argue this decision, but it isn't something as weird as it seems. I would say this happened only 1 month ago or so.
4) Seeing some light at the end of the tunnel, AMD starts to release some info to their resellers and reviewers like Anandtech, in order to keep them quiet and informed that they are "on to it". They still remain silent, either because by now they might as well, or because the fix is NOT ready or complete and they do not want to enter any debate without answers. This is happening now.
So as you see there is no need to be paranoid; there is a much simpler explanation for events.
Some would say AMD should have detected the issue before launching the 7000 cards, but seriously speaking this isn't so true.
Why?
Well, you need only read some of the first reviews that came out for the 7970M. I don't think any of them were biased; they even mentioned Enduro and its need to be improved. But if you test 20 or more games, you run only 1 or 2 minute benchmarks for them; you don't have someone playing for hours and switching to multiplayer modes and what not. Also many games perform correctly, so there really wasn't and still isn't an instant crash of sorts to flag a major issue. Also, the card works, even if it underperforms sometimes, so detecting this anomaly wasn't so easy to begin with. Not saying AMD is to be absolved of all charges, but I really don't see them as the villains many are picturing; they have been just too slow, too dumb and too arrogant, but then again who isn't sometimes?
What's your point, you ask?
Simple: the issue will be fixed to a good degree, not 100% probably, not for all games, but no card works flawlessly in all games; that's a whole different topic.
So have hope, and don't crush the only option we have to avoid total monopoly in the graphics world.
What can we do then? Sit back and wait?
NO, absolutely not. The ball is rolling and growing and we need to stop it soon or it will become an avalanche.
You can cry.
You can shout and spit.
You can test and report your results.
You can post or just read.
Whatever you do will be more or less useful and might help getting this boat to port.
best regards
Voz
For more reasoning read the spoiler; I know it's long but it sums up my reasoning for saying it's a driver issue.
As for faith, I am placing it on general common sense more than on AMD. There are logical and illogical explanations for this whole ordeal, and the most pessimistic ones are just as wrong as the overly optimistic ones. The issue is driver related and will be fixed, but they already didn't fix it fast, and they probably won't fix it flawlessly.
Again, I am no techie at all; I draw my conclusions on everyone's findings and tests only, so I might perfectly well be wrong, but so far no one has given me enough evidence to make my view turn around (which it perfectly might if the evidence is there to interpret).
regards
Voz -
BonsaiScott,
We could tell everyone until we're blue in the face that AMD isn't the root of this problem. The fact is, Intel's reference design and Clevo's blind adherence are at fault.
The fact still remains however, that AMD has to fix the problem. Not Clevo or Intel. -
The OP is more detailed, but I was trying to summarize for the lurkers. -
-
Clearly the root of the Enduro problem is that there is a bottleneck somewhere, and some rendering engines, maps or scenes may never reach the limits of that bottleneck before they are limited by GPU horsepower instead - thus hitting the 99% utilization mark before this Enduro bottleneck becomes a factor. We know that this bottleneck is not GPU speed, as increasing GPU clocks or memory bandwidth has no effect on the issue. The CPU also does not seem to be a factor, as going up and down from 2.8 GHz to 3.8 GHz had almost no effect on utilization.
Again, I will wait for those new drivers before speculating further, but pretending everything is peachy only removes the pressure from AMD to deliver a rapid solution. -
I have yet to see proof that the m17x-R4 has a dedicated hardware mux and the Clevo models do not. My HP dv6z-6100 originally did not have fixed (i.e. non-dynamic) graphics, but it could be enabled with a modified BIOS.
Everyone who is commenting about the hardware intricacies of Enduro should read this article. I'll quote a very important part:
" As far as getting content from the dGPU to the display, the IGP always maintains a connection to the display ports, and it appears AMDs drivers copy data over the PCI-E bus to the IGP framebuffer, similar to Optimus. Where things get interesting is that there are no muxes in AMDs dynamic switchable graphics implementations, but there is still an option to fall back to manual switching. For this mode, AMD is able to use the display output ports of the Intel IGP, so their GPU doesnt need separate output ports (e.g. with muxes). With the VAIO C, both dynamic and manual switching are supported, and you can set the mode as appropriate. Here are some static shots of the relevant AMD Catalyst Control Center screens."
It's not a question of a dedicated mux or no dedicated mux. There is no dedicated mux. The IGP is the mux. It's just that Alienware has enabled fixed graphics in their BIOS whereas Clevo/Sager has not. (Also, Enduro does copy to the IGP framebuffer.) To this day the official BIOS for my DV6z still does not allow fixed graphics. The takeaway? No fixed graphics doesn't mean that it's not in the hardware, nor does it indicate a problem with the construction of the notebook.
Also, the reason why nobody uses dedicated muxes anymore is because they were unstable and inferior to the muxless design (e.g. screen flashing when switching GPUs) and Microsoft pretty much told all the manufacturers to stop using them. As the 680m shows, the muxless design isn't intrinsically flawed. It just requires more robust drivers. Not sure what conspiracy theory you're getting at.
Finally, the MXM is a standard and known to both Nvidia and AMD. It is up to Nvidia and AMD to make sure that their hardware and software comply and work properly with the standard, not the notebook manufacturers. This is AMD's problem through and through, not Clevo's or Intel's. -
Drivers are likely the answer, considering the 680M works fine with performance similar to the AW. So while it makes sense that the 7970M is fine, I don't see how one can conclude the implementation is flawed. The only thing left, to me, is drivers. Not Clevo, not AMD engineers; it's AMD's management and driver team.
-
Personally I don't think it's just Enduro that's the problem. Power gating is also there somewhere in the mess, which is why I think some games suffer with GPU utilization while others don't. So I think AMD is working really hard optimizing the drivers to ensure that some parts of the GPU don't shut off while playing the affected games. And apparently it's no easy task, and I can only imagine the coding task they have in front of them.
It's no wonder Nvidia struggled with Optimus earlier too, when you think about the complexity. Not only is it X different brands using X different GPUs with Optimus, but it's also X different games with different GPU scaling which varies in intensity. They have to calibrate all of these scenarios, write them into code and basically patch up whatever doesn't work.
imo -
Congrats on your 680M purchase, $2300, damn.
-
Yes I have. The GT70 is on its way
-
To really test your theory we would need some kind of bandwidth/scene/map data, and see if it shows the bottleneck you are talking about. But I have no idea how to do that, or if it really is something measurable.
As you say it isn't an easy task and it will take several driver releases to fully address most of the issues, but at least the ball is in the park now.
regards
Voz -
To me this seems like a really big headache, one which humanity currently has the technological capability to solve, but doesn't; and when I say solve, I mean make it work flawlessly, the same as a desktop GPU works, and the reasons are money related.
The explanation in the original post was one of the reasons I tended to believe at first, as I do now, that there won't be so many problems with the card in an X7200 chassis, where there is no other GPU.
Probably this whole iGPU/dGPU trouble would have to be redesigned from scratch, taking everything we have learned so far into consideration to create a perfect, flawlessly working solution.
For that to work, one would have to bring to the same table INTEL (Intel CPUs), Nvidia (Nvidia GPUs) and AMD (GPU and CPU manufacturer), and I will give you only one try to guess why that is not going to happen.
In case you can't figure it out for yourself, I'll let you in on the little big secret. It's because of monetary interest.
Then it's easy to see why Intel is probably hand in hand with Nvidia, since AMD can do both GPUs and CPUs, and if they were to have the upper hand in this, the others would suffer.
I would say we should throw away the current monetary system and end world hunger through a resource-based economy; then we will get the flawless switching we all want to have.
The situation is more complex than one would care to admit.
And if all the engineers sat at one table as one team, I'm sure more than one solution would come to mind.
One such solution I can think of just this instant would be the integration of a secondary mini GPU on the big GPU, or perhaps algorithms to disable 95% of a big GPU, resulting in a GPU with lower power consumption and lower performance. -
-
Also, if anything, Anand's tests were not very encouraging. His Kombustor results are the same as mine with current drivers (in fact right now I score 2% higher utilization in my DX9 test than he did with that beta driver) and that bottleneck clearly still existed, as decreasing detail still decreased utilization severely. You can choose to put on your rose-tinted glasses and start celebrating now, or wait a little and see what AMD release before you jump to any conclusions. -
Somehow, I equated "nothing wrong" with "working properly." -
-
Allow me to attempt another track:
For just over 6 months I have had 7970M MXMs running in 4 different computing systems, all of them with vastly different architectures. The only one most people here will be familiar with is the Alienware M17x r2 (see my post in the "OFFICIAL* Alienware M17x Owners Lounge - Part 1" here).
In all that time, on all those platforms, I have always gotten the performance I expected, and I have beaten the SNOT out of them (108°C is a real limiter, but safe to hit, many times). None of this wacky slowdown business people are reporting here. The only reason I even heard of this issue is because I ran across it while researching my next laptop purchase (more on that later).
So what's the difference? None of those 4 systems I have has an Intel iGPU.
(Tongue in cheek) Conclusion: Intel iGPU breaks the 7970M MXM.
(Real) Conclusion: Without forcing pixels through the Intel iGPU, the 7970M works as advertised and is, IMO, the best MXM there is (at least until I can get a GK110 on an MXM). Certainly for what I do. Clearly there's something broken with the current implementations of the 7970M MXM in laptops which force pixels to be displayed through the Intel iGPU. While the functionality is correct (although switching appears to be more manual and less automatic than most would like), the performance is seriously compromised for some games.
Is that because:
1) The drivers need to be fixed? Knowing what I know, I believe this is the most likely scenario. The AMD driver guys just have to fix it. Unfortunately, things like this (requiring close collaboration between 2 companies which are competitors) are much more challenging (management) and complex (technically) than most realize.
2) Is the hardware necessary to support a vastly lower impact on performance (like NVIDIA provides today) broken? I really don't think so. AMD (and NVIDIA) has had the hardware functionality to support what NVIDIA marketing calls the new Optimus Copy Engine for many years.
3) Laptop manufacturers should use a mux to switch between iGPU and MXM video output? Yes! Architecturally, I believe that the approach Intel wants everyone to use (Intel iGPU is the master, NVIDIA and AMD GPUs are the slaves) is fundamentally broken. Other than the obvious inefficiencies (iGPU / MXM coordination, additional latency, extra GPU utilization), it loads the CPU memory subsystem needlessly. How much time and money do you spend buying and tweaking your CPU's memory subsystem? Did you spend the extra cash to buy DDR 1600 instead of DDR 1333? That bought you, best case, (1600-1333)/1333 = 20% improvement in CPU memory bandwidth. What does the P150 get for actual CPU DRAM bandwidth? I'll make a guess and say 16 GB/sec actual (not peak theoretical). 1920x1080 at 120 Hz is 1 GByte/sec. Pushing that over the PCIE bus from the MXM to the host where the Intel iGPU can pick it up takes up 8.3% of the available PCIE bandwidth (GPU to CPU), which isn't so much of a big deal because it's usually lightly loaded during gaming. However, the process takes either 2x or 4x that much of host CPU memory bandwidth. That means you're losing 12.5% to 25% of your CPU memory bandwidth that you spent real time and money on to optimize and make faster, just because Intel wants the fast GPUs to be slaves to their iGPUs. To me, that is fundamentally broken (a quick sketch of that arithmetic follows below).
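For reference, here is that percentage math spelled out. The 12 GB/sec PCIE figure and the 16 GB/sec host DRAM figure are the assumed/guessed numbers from the paragraph above, not measurements from a P150:
[CODE]
# Percentages from the paragraph above, using its assumed figures.
GB = 1e9

display_traffic = 1920 * 1080 * 4 * 120 / GB   # ~1 GB/s of finished pixels

pcie_bw   = 12.0   # GB/s, the GPU->host figure quoted earlier (assumption here)
host_dram = 16.0   # GB/s, guessed achievable CPU DRAM bandwidth, not peak

print(f"PCIe share:            {display_traffic / pcie_bw:.1%}")        # ~8.3%
print(f"Host DRAM, best case:  {2 * display_traffic / host_dram:.1%}")  # ~12.4%
print(f"Host DRAM, worst case: {4 * display_traffic / host_dram:.1%}")  # ~24.9%

# And the DDR3-1600 vs DDR3-1333 comparison:
print(f"DDR3-1600 over 1333:   {(1600 - 1333) / 1333:.1%}")             # ~20.0%
[/CODE]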
The 7970M in my M17xR2 has been my main workhorse for the past ~5 months. It has performed flawlessly for me. For me, the main limitation of that setup is the M17xR2's x8 Gen2 PCIE. I was researching for its replacement. Thanks to others from NBR I know the M17xR4 runs the 7970M in x16 Gen3, which is 4x faster than in my M17xR2. I have measured the 7970M MXM at 12 GB/sec over PCIE in non-laptop systems (same speed as the HD 7970 PCIE workstation card). Being able to get that in a compact, portable, affordable system is amazing. However, I really would like something a little smaller and lighter but still with a fantastic screen. When I discovered the Sager/Clevo P150 I thought I had found exactly what I was looking for. Then I dug deeper and found the screwed-up path pixels have to take to get to the screen from the MXM. That's a killer for me. Even if AMD is able to optimize the path so it has, for all cases (not just some), as little impact on performance as the NVIDIA solution, I will not buy it. It is fundamentally broken.
It'd be interesting to know if the P150 has x16 Gen3 PCIE between IB and the MXM, but it's academic (for me):
Sager / Clevo are you listening? I wanted your laptop. For the lack of a ~$5 part I would happily purchase it. Alas, whoever will give me the best deal on an Alienware M17xR4 gets my $.
Disclaimer: I do not work for Dell, Alienware, AMD, ATI, NVIDIA, the Federal Government or McDonald's, nor do I have a financial stake in their success or failure. Well, except for the government. However, I do use their 'products' (except McD's), and have been programming GPUs for computation on some of them for nearly 11 years. -
BonsaiScott, what you say sounds very logical and well founded, and I agree with it in most parts. BUT you fail to address that Enduro, as software, is responsible for the underutilization issue. The percentages of bandwidth usage you refer to also seem correct, and therefore there is no bottleneck there.
That the CPU might be crippled by this way of transferring the pixels seems to be the only real concern, even though it remains to be proven whether 12% or 25% bandwidth usage by pixels really affects the CPU or not.
I quote you
Anyway all this would apply to both Nvidia and ATI cards, so even if the muxless design is "broken" as you state, that's another matter entirely and not what everyone is talking about.
I do agree it's an important topic to address the validity or existence of switchable graphics technology and its current hardware implementation, even though there are obvious reasons for it.
Still, the issue remains the same: the main problem that causes all the issues people complain about is ENDURO and its AMD drivers. And that is a fault on AMD's part; it's like selling a car without 4th and 5th gears for some terrains (i.e. any computer with Enduro turned on).
Reading a comment by Jarred in his own article on Anandtech, he addresses the bandwidth usage too, saying Optimus had similar problems in the beginning. I still don't think this is responsible for the underutilization reported; even if it's high, it isn't more than 5% of bandwidth. He also seems to answer your academic question:
"PCIe 3.0 can in theory do 16GB/s" - I think that means the Clevo P150 has that interface.
I quote Jarred here:
-
nice post dude. unfortunately it makes me kinda sad owning a clevo p150em now :/
+rep -
-
Here's why:
-
R3d, there definitely is a mux in Alienware systems, and seeing how Enduro is plagued with issues I doubt any other manufacturers will omit a mux from their future designs.
-
Did you read the link?
-
That link is from 2011, way before the Alienware m17x r4 was released. Maybe Dell saw the poor performance of the 7970m and included one?
-
Occam's razor. Fixed graphics isn't supported by NVIDIA so it's not present on the m17x for NVIDIA gpus. Fixed graphics is supported by AMD so it's enabled in the m17xs with AMD gpus. I don't see how Alienware "definitely" spent the time and money adding their own mux just to duplicate features that were already present in the AMD system, and developing their own drivers to switch between switchable and dynamic on the fly, and then go on to disable all of that for NVIDIA systems for some reason.
-
R3d: Alienware have a MUX in their notebooks and they have had that for a long time now, ever since the 6990M. And no, it's dedicated hardware, not something within the CPU like you suggested earlier. If it were, then Sager/AMD would have fixed the Enduro problems ages ago.
There is a BIOS function within the Dell BIOS where you can enable and disable the MUX, if I remember correctly.
In the current setup the MUX behaves exactly like a typical MUX, completely shutting off one section of graphics to the point that it's not even detectable to the BIOS, as if it doesn't exist. Dynamic switching needs both graphics solutions to be 'visible/system aware'. Only possible through dedicated logic. -
Pretty sure fixed graphics works with both the 680m and the 7970m, as both cards can be used in P150HM laptops.
-
The GPU is wired directly to the screen, not routed through the IGP like in the EM notebooks -
yep, my post was in reply to R3d's.
-
And then there's all the AMD dynamic switchable notebooks that had fixed graphics enabled through BIOS updates, which function exactly like the muxless notebooks that had fixed graphics to begin with.