I am on the Nvidia beta 285.38 drivers. I had left my computer on for a few days and suddenly started getting distortion/artifacts on the screen. Restarting the computer didn't fix it. I also experienced the issues outside of Windows, which made me think it was not the drivers.
I went ahead and reinstalled the OS for the heck of it, and have seen the problem occur randomly once in the week since the reinstall. It hasn't come back since then. Anyone else have this problem or know how to resolve it? I am still a few days within the 30-day return period and am reluctant to keep the machine if this is going to become a serious problem later on.
-
Kingpinzero:
Basic question: what are your temps under load and idle?
-
CPU and GPU temps; use HWiNFO64 to get these.
-
I had a similar problem with my G53SW: randomly, after a few hours or a few days of idle, I would get artifacts and continuous driver crashes until either a freeze (and a forced shutdown) or a clean full shutdown (not just a reboot).
I thought it was a hardware problem, but it was strange because I had no problems at all under continuous use or FurMark, even with an overclock; it happened only at idle.
After some days of research, I figured out that the problem was the default idle clocks, which were too low for my card.
So with Nvidia Inspector I changed Performance Level 0, which was 50/101/135, to match Performance Level 1 (202/405/324). This is stable since PL0 and PL1 run at the same voltage.
The idle temperatures didn't change, and I haven't had any problems since I did that -
GPU: 47°C idle, 70°C under load -
Thank you Chastisty. I searched the whole forum and couldn't figure out what program to use. I tried nTune, but that just bluescreened my system, probably because nTune is like 4 years old =P
-
I will give Nvidia Inspector a try if I get this problem again. Does your card/system perform faster with the Performance level change? -
placing bets your error was just the windows tdr event - "the kernel mode display driver nvlddmkm blah blah blah has stopped responding and was restarted".
one worthwhile question for volati1e and megaltariak: when it would artifact at idle and then reset the display driver, would it also always jump the display backlight to max brightness, while atk still thought it was set to whatever it was previously?
i usually run at minimum backlight brightness, so full brightness from min is real easy to spot, and in my case atk thinks it's still at minimum, so i have to increase the backlight and then decrease it to get back to the pre-crash setting.
still vote for this being a bad gpu symptom. find my old posts and follow them to the xoticpc forum, where i have some screenshots and some camera pictures of the screen when the idle artifacting happened. the corruption doesn't go away except on a full shutdown/startup cycle, and problems with the display at low load seem eerily familiar. what worries me is these machines have more or less been around for a year, and people reporting this issue seems to be a really new thing.
funny thing is the driver crash tdr event is what causes the lockup from this.
windows tdr settings default to 5 driver resets in 1 minute = force a bsod and shutdown. if there's a display issue during that, that's all she wrote - forced shutdown, because even the a20 (numlock) won't respond. that's a hard lockup.
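for reference, those thresholds are plain registry values under the graphicsdrivers key (the value names are documented by microsoft; the numbers below are just the win7 defaults, shown as a sketch):

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
    ; seconds the gpu may hang before windows attempts a driver reset
    "TdrDelay"=dword:00000002
    ; this many resets within TdrLimitTime seconds = the 0x116 bsod
    "TdrLimitCount"=dword:00000005
    ; window, in seconds, for counting resets (0x3c = 60)
    "TdrLimitTime"=dword:0000003c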
edit: also take a look at page 137 of the asus g53sw owners lounge and tell me if that big old image looks familiar if you would. trying to see how many cases of recently acquired machines have this specific issue, and if it presents the same across the board for us. -
@volati1e:
No, this will not increase performance in games, since PL2 is used during 3D games. Maybe it will remove some lag when you scroll a web page fast, but that is barely noticeable.
Note that you can run this "overclock" (not a real one, since you are just replacing PL0 with PL1) at every logon by creating a shortcut and dragging it from the desktop to the Startup folder (Start/All Programs/Startup).
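For reference, the shortcut target looks something like the line below. I let Inspector's "Create Clocks Shortcut" button generate it rather than typing it by hand; the exact flags and argument order differ between Inspector versions, so treat this line (and the install path) as an assumption, not gospel:

    "C:\Tools\nvidiaInspector\nvidiaInspector.exe" -setGpuClock:0,0,202 -setMemoryClock:0,0,324 -setShaderClock:0,0,405

The three numbers in each flag would be (GPU index, performance level, clock in MHz), matching my PL0 values above.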
My computer is totally stable with this, and I even overclocked all the PLs:
PL0: avoid crashing issues
PL1: make some 2D games use this one instead of PL2 (lower voltage -> less power consumption, heat and noise; dropped temps from 64°C to 55°C)
PL2: Performance
@steelblueskies:
The error was exactly that, but it happened continuously until shutdown, unlike when you overclock too much.
My brightness was always at max, so I didn't notice this problem, and I don't want to reproduce it (I like having long uptimes).
I don't think that I have a bad GPU, since I don't have any other problems. It's probably that ASUS only validated a good margin for the higher frequencies and forgot to test the low ones, which makes cards that don't handle underclocking well crash. -
Steelblueskies,
I don't recall having the pink artifacts you do on page 137. Mine was more like white and green bars all over the screen.
You are right about the display driver. It would crash and then restart the display driver, only to crash again. It did seem like it made my display go full brightness when it happened, but as of now I can't reproduce the problem anymore.
I will take pictures of the screen if it happens again and then post them online. I'm still worried about the power button issue, since that seems to be the big thing with this system, but I have yet to run into it (knock on wood).
Did you replace your GPU? If so, did it fix the issue? I really hope that isn't what the problem is with mine. I don't want to send my system off for repair and wait weeks/months to get it back, probably all scratched up. -
tried it, ran a folding@home gpu client session, took a nap. woke up to the same issue. max temp hit was low 80s °C.
the fact this error can only be reproduced by waiting, and appears sometimes fast, sometimes three days later, is maddening. at least it usually appears within 4-12 hours of sitting.
still pending on getting the machine sent in. pulled it out of the untaped shipping box to retest this case with. i figure every bit i can hand off to whatever asus certified tech has it land in their lap during the 3 day observation xoticpc talked them into doing (as opposed to whatever is standard - we want them to see the problem at least once) will help ensure it only has to ship once to get it fixed (and thus minimize chances of damage, time without gear, and chances for someone to fubar something else).
going to have to look into how to mod power state settings in ubuntu based linux distros as well. had the artifacts appear running a live dvd at one point too, which says hardware, but i'm a lot less clued in for deep management and hardware monitoring/optimization on that side of the fence. heck, the livedvd copy wasn't even using the nvidia drivers for linux.
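for whoever gets there first, the usual way to pin powermizer on the linux nvidia driver is the RegistryDwords option in xorg.conf. a sketch of what i mean - these dword names and values come from old nvnews forum lore and are undocumented, so treat the exact numbers as assumptions (0x2222 reportedly means "fixed level on both ac and battery", 0x1 reportedly the highest level):

    Section "Device"
        Identifier "Device0"
        Driver     "nvidia"
        # lock powermizer to one fixed performance level instead of adaptive switching
        Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x2222; PowerMizerDefault=0x1; PowerMizerDefaultAC=0x1"
    EndSection

won't help the livedvd case, since that wasn't even on the nvidia driver, but it covers an installed system.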
of course if we can narrow it down as a group we can probably ensure a more reliable fix, and faster turnaround for some of us and future cases as well (not to mention hopefully finding a faster way to test whether replacement parts have the same issue).
also, any gpu which isn't stable at stock settings where others are is a bad gpu - unless more than half the users have the same issue, in which case a short petition campaign and a fix from asus is the historical response. just look at the g53 and g73 series with the ati cards that were also unstable at stock clocks, resulting in gsod/vsod, and that also had thermal issues.
but that was most of the users who were seeing those things to some degree.
with the gtx460m in these units we are among the first few seeing the problem and reporting it so far.
we've got about four of us here, with one or two other possible cases, which says THIS IS NOT NORMAL in big bold neon letters. after all, the users here and at the asus rog forum aren't exactly shy about seeking consensus, help, and helping. so welcome to the vanguard on a new issue - may whatever merciful tech gods exist have mercy on our wits and poor machines. -
I don't know how to reproduce the issue. I'm going to let my system run for a few days without rebooting and see if it comes back, so I can take some pictures of the display.
-
yeah i hear that. sadly it shouldn't take days, or it'd be a manageable issue.
just set power management to not turn off the display, sleep, or hibernate, and let it sit on the desktop while you sleep. good odds if it's the same issue as the other few of us, it will have happened by the time you get up.
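quick way to do that from an admin command prompt, for the ac profile (0 = never; these are the stock win7 powercfg switches):

    powercfg -change -monitor-timeout-ac 0
    powercfg -change -standby-timeout-ac 0
    powercfg -change -hibernate-timeout-ac 0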
wish there was a proper discussion similar to "Diagnose video card problems by comparing with example corrupted screens", with more example cases, somewhere out there. ah well.
edit: i realise i never asked, but you fellows did test using an external display to see that the artifacts are appearing there as well when they occur, right?
i have, and they are even less fun on a 40 inch 1080p display
if they don't show on an external display but do on the builtin, that would point to the lcd cable or the lcd itself. but it's an important step to have gone through for details in any event - the sort of thing a support rep will ask you to do at some point. and given how hard it is to make it bug out when we WANT it to for testing, it's easiest to do it next time it happens so you know the answer -
I do not have my G53SW hooked up to an external display, but I do have an unused LCD monitor in the closet I can hook up next time this happens.
-
waiting sucks. ;(
any news or new things you've noticed for you fellows?
my last running observation was that i tend to use a classic theme in win7, which means aero is disabled. i tested with a normal win7 theme with all the shiny transparencies and effects, and when it happened there were no dots, just a frozen image: black screen > full brightness > windows display-reset box in the middle of the old frozen desktop image. even the mouse cursor was unmoving. ctrl+alt+del did nothing. numlock switched the light, so it wasn't a "hard" freeze. 4 more blank > error box phases, then the magic bsod screen.
guess the only reason i was able to actually see the problem was because i wasn't pushing it with aero 3d on the desktop before.
kind of makes me wonder whether you guys were running the classic theme, or just about any other, when you saw the problem?
--------------------------------------------------
just for information's sake, i also thought it might help to note the manufacturing date of the machines, to see if they all happen to be from the same month or something. i doubt we'd be so lucky, but hey.
04/2011 manufactured on the bottom sticker on mine. -
Absolutely nothing; my computer has been rock stable since I applied my "fix". Maybe I had a different problem (but with the same symptoms).
Aero is always enabled for me, and I got the green and purple dots, not a static image.
I will inform you if I get a crash, or in a few weeks if it is still stable. -
I did get the driver crashing issue again. It crashed and recovered repeatedly. I waited to see if the corrupted screen/artifacts would appear, and they did not. The system locked up after about 10 rounds of crashing/recovering. I shut it down, restarted, and the problem hasn't come back. This was about 4 days ago.
The message was:
Display driver stopped responding and has recovered. Display driver NVIDIA Windows Kernel Mode Driver, Version 285.38 stopped responding and has successfully recovered. -
hm, tried the new 285.62 whql certified driver set. also installed nvidia system tools for the monitoring app.
upside: fewer tdr events.
downside: the same issue persists otherwise.
it was rather amusing when, instead of a tdr popup box, i got one saying the opengl display driver failed to respond so the nvidia system tools monitor has shut down, error code 3, would you like to go to www.blahblahblah.. instead.
ah well.
anywho thank you for your inputs my fellows, and any news of recurrence or not. i figure even if it doesn't solve the problem it might help others down the road.
and the bsods, when they happened, were 0x00000116 (VIDEO_TDR_FAILURE) codes. the fault pointer and last operations were largely irrelevant, as the shutdown was always a 5-tdrs-in-under-a-minute bsod, so they referenced the last thing windows was doing, not what triggered the last tdr event :/
also seem to have managed an error-free occt gpu memory test cycle (10 passes) once.
and found a tool that claims it can access all gpu vram for testing from a bootable cd or floppy: vmt. written by a native russian speaker, but with some documentation in english. apparently the bootable version actually uses real vram addresses when reporting errors, unlike windows tools, which can only report virtual address pointers. bottom line: if it finds a problem in a vram location in the bootable environment, it should be the real bad location, not possibly the same location being mapped several times over.
going to give it a spin soon and see what an overnight cycle gives me. -
http://forum.notebookreview.com/asus-gaming-notebook-forum/617234-g53sw-screen-issue.html
is this the same issue with you? -
with 3d games in fullscreen, or bink/other oddball game video codec intro videos in fullscreen, it's usually blocky corruption. think sliding puzzle, with some blocks in the wrong place and others solid colors, shortly followed by a lockup of the displayed image and/or the entire machine.
non-fullscreen, the symptoms vary slightly. vlc, which does NOT use the gpu's hardware video acceleration, may show large blocky corruption, go solid green or purple or white or black when misbehaving, or lock the machine up/tdr.
3d, like the spinning nvidia logo in the nvidia control panel, has various issues in the 3d image window; it may do the solid color thing or lock up the machine/tdr.
flash tends to vary in severity by browser and by whether the app uses the gpu's hardware acceleration, but it typically either goes solid colors or runs dirt slow compared to normal, with differing types of artifacts coming and going - usually either a wavy line pattern of a solid color across the flash frame top to bottom, or lines a couple pixels wide full of an analog-tv-static-looking mix of color running left to right at various heights in the frame.
at the same time as any of the non-fullscreen items above, we get dot patterns as seen in the image i linked, and other color corruption. e.g. in google chrome, the middle of the unused tab space at the top has a darker blue or pinkish corruption line running from the open tab to the minimize/maximize/close buttons.
shrugs. i know it's not a simple driver issue in my case, as i saw it in expressgate and linux. can't even properly describe what it looked like in expressgate.
good to know your repaste seems to have fixed the issue you were having though - or does the post here mean your problem came back?? -
I forgot to say that this utility helped me investigate the problem:
PowerMizer Manager | Some More Bytes
With it, you can lock the GPU to a defined power state.
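From what I can tell, it just writes the NVIDIA PowerMizer registry values for the display adapter. A sketch of what a locked state looks like; the {GUID}\0000 adapter key differs on every machine, and these value names are undocumented driver internals, so treat the specifics as assumptions:

    Windows Registry Editor Version 5.00

    ; hypothetical path - find your adapter's {GUID}\0000 key first
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{ADAPTER-GUID}\0000]
    ; 0x2222 reportedly means "fixed performance level" instead of adaptive
    "PerfLevelSrc"=dword:00002222
    "PowerMizerEnable"=dword:00000001
    ; reportedly 1 = highest performance level
    "PowerMizerLevel"=dword:00000001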
Maybe I will install Linux on this PC without changing the clocks to see if the problem reappears, but for now Cygwin is enough for my UNIX needs, and I will be able to tell if I get a good uptime in Windows. I just rebooted after some OC tests and a crash; I will use the computer normally (but with my custom clocks) starting from now. -
interesting. i tried powermizer manager in one of my test attempts - did it claim the keys it was looking for weren't already in the registry when you first ran it? thought that was odd. i wound up using the nvidia system tools package to keep the mid-level clocks, by setting them for the windows desktop etc., dropping to the lowest stock speed only if temperature exceeds a given value. kind of liked the rule configuration, then noted that just keeping that package's system monitor running made the clocks stay at the highest stock state, and went with that, since i'd be running a thermal monitor anyway. two birds, one stone.
the other reason to be careful is if the problem occurs while you are in safe mode for some reason - not too sure about the clock setting management software running in safe mode. additionally, some of the recovery options would be vulnerable to bsods/etc from the issue as well, which could be a real disaster.
-
The first time you run it, you must create the config (and redo that after each driver update).
So now you are always running at the highest clock setting, right?
Then we will definitely be able to tell whether this problem only happens at low clocks or not. -
already answered that. in my case it happens most frequently at low clocks, but it has occurred under the highest stock clock state during extended load conditions. forced high and ran the f@h gpu core. it repeatedly appears around 42-43,000/50,000 progress on a work unit, or around 4:20-4:43 minutes into that state. most consistently forceable result i've managed, but asus would just cry "contact f@h for a fix".
i reiterate it wasn't a thermal issue as temp logs indicate 83C as the peak temp for those specific tests, using three different nvidia driver sets, across both win7 x32 and win7 x64.
a differential test using the cpu-only client shows stability across the same test setups.
still have to try the same stuff in a linux environment to nail down results, but my attempts to get a 64 bit debian install going have been failing. normally i never use anything but framebuffer video in linux, but running a gpu core client to test requires the extra work of setting up the provided nvidia drivers.
if i can peg it down to the same specific cases in the other os as well, it might prove useful, as unices can be quite a bit more verbose about when something goes wrong, why, and how. -
I also have this problem. I have no OC on. I leave my Asus on 24/7 ... This happens when I plug into an external monitor (TV) via HDMI... Max temps 80-82C for GPU and 60-70C for CPU... I just received 2 more of these machines and I will test them also.
-
So the problem occurred again for me. But I forgot to test it on an external monitor. I was able to get a picture.
Megaltariak, I tried using Nvidia Inspector, but my GPU clock slider is greyed out and I can't move it.
Also, you said your values are:
Performance Level 0: 50/101/135
Performance Level 1: 202/405/324
But my values are:
Performance Level 0: GPU 50/Memory Clock 135/Shader Clock 101
Are the last 2 values reversed? And how do I unlock the GPU clock slider? -
The shader clock is always double the core clock; changing the shader clock will change the core clock too. So yes, the tools just list the last two values in a different order.
You need to have:
202 MHz core (it is greyed out)
324 MHz memory (use the slider)
405 MHz shader (use the slider; the core clock will change too)
then apply
I hope that this will fix your problem like it does for me.
Since applying that, the only crashes I have had are related to excessive Performance Level 2 overclocking (a single crash and no artifacts). -
here's hoping it works for you. that's a very odd pattern - much, much larger blocks, and favoring one side of the screen and the taskbar. and i can see the latest 285.62 drivers didn't make it go away for you either.
-
So far with Megaltariak's fix, I haven't had the problem come back (knock on wood). -
The problem just happened to me again, and I had to hold down the power button to shut off the computer. I am not at home now, so I don't have an external monitor to test with. However, when I ran Nvidia Inspector after it rebooted, I noticed my P0 values were back to the default. Did they get reset because of the lockup? Or did they somehow get reset, which caused the lockup?
-
All the values you enter in nvidia inspector reset after a driver crash or a reboot.
So I guess that my fix only works on my machine.
Personally, I created a shortcut and moved it into the Startup folder so the new clocks get applied as soon as I open my session.
Maybe try it one last time:
-make the shortcut, test it (it should apply the new clocks when you double-click it), and move it into the Startup folder
-power off the laptop
-remove the battery and leave it out for a minute
-reinsert the battery, boot it, and immediately log on.
If the problem still persists, I'm out of ideas for now. -
So you're saying the values reset at every reboot? Because for days I left my computer running without a reboot, and it worked fine.
The problem most recently happened after I shut down my computer, and if the values reset then, that is probably the cause. -
That's reassuring
Yes, the values reset when you reboot the computer; that's why I advise making a shortcut in the Startup folder. (If you do, check that it is correctly applied.) -
It's been a month now, and I have not seen these artifacts since I applied my fix.
So I'm confirming that it definitely worked for me -
seems the issue for me is more in line with what's seen with other cards, as per here: GPU in state where results are not reproducible! - NVIDIA Forums
which, while related to the "failing under heavy double precision load" thread - NVIDIA Forums - is clearly not the same specific case. and of course this line of problem is at least marginally related to the class of problems seen in The Nvidia 400/500 series lockup club! - NVIDIA Forums and in a dozen similar threads.
the first link was the most telling and similar, and is a problem appearing across the fermi generation chips, all the way into the tesla gpu computing line. wish your fix would change the outcome for me here, megal, but no dice so far. -
I bump this thread to bring some bad news
After a few months of the GPU working well (with my fix), I recently had 3 crashes in a few days, exactly the same as before.
As these crashes seem to happen only at idle (at least for me), I've activated the 3D Text screensaver to always keep my GPU busy. The GPU load is 13% at 521/1042/…, and the temps never exceed 56°C, which is good (the fans are noisy above 60°C).
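If someone wants to try the same workaround: the 3D Text screensaver ships with Windows 7 as ssText3d.scr, and you select it in the normal Screen Saver dialog. To preview it from a command prompt (/s is the standard "run now" screensaver switch):

    %SystemRoot%\System32\ssText3d.scr /s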
If it doesn't work, I will disable PowerMizer and will consider an RMA, as the fan noise from 60°C idle temps is not a real joy for the ears. -
kimiraikkonen:
Maybe bad VRAM, or faulty cabling (if the GPU has been re-mounted)?
-
I've not remounted the GPU. Bad VRAM is likely, but it only happens at low frequencies / when the GPU is doing nothing.
So far, no problems with the overclocked low clocks / disabled PL0 (if someone is interested, I can explain) / 3D screensaver, but it is too early to tell if it's really stable. One thing I can say is that it NEVER happened while I was in front of the laptop.
That's by far the strangest hardware issue I've had: a GPU that is more stable running games/FurMark than sitting idle at the desktop. -
I didn't read every post, but I had a problem something like this at one point with my G53SW-XN1.
I was using the Nvidia 285.xx driver at the time; after updating to the 290.xx beta driver I have never seen the problem again.
I was also getting "driver has quit responding" errors with the 285 driver, which I have no longer had since going to the 290.53 beta.
I tried the 295.51 beta on a couple of other rigs, but it had problems with folding@home and lost work units. So I'm back to 290.53 on those desktops. -
Also, 60 deg. is nothing to be worried about at idle, especially if you're running dual monitors and just an internet browser. Is your fan really loud? You might just try shutting down and blasting some bursts of compressed air into the vents. Even then, if it's just idling, the fans should only speed up for a few moments.
Edit: My apologies, I re-read your post. What is powermizer? And I'm pretty sure there is a program that would allow you to modify the fan profiles so that they don't get loud until, say, 65 deg. -
PowerMizer is what changes the Nvidia GPU clocks and voltage according to the actual usage; it is enabled by default (as part of the Nvidia drivers, I think).
You can disable it, but then you will be stuck either at low clocks (low temps but very bad performance, and crashes for me), medium clocks (low temps but bad performance), or high clocks (good performance, but it raises the idle temps from 45-50°C to 60°C for me).
As for fan profiles, do you have more info? I am very interested in such a program if one exists for G53 laptops.
Still no problem with the 3D screensaver. Since the problem never occurred during usage (either games or non-intensive GPU work), I guess it will keep working as long as the hardware doesn't deteriorate further by itself. -
Hi guys, it seems to me (via experience and the hundreds of posts I've googled) that quite a few of these problems are stemming from known issues with the 280 series drivers.
Any driver I've tried (460m and 560m) earlier than the 280.xx series releases works fine.
The 295.51 betas are so far the only driver post 280 that I haven't had major trouble with.
Far fewer driver not responding errors after install, and disabling hardware acceleration in Firefox took care of the rest. The worst issue I have now is the occasional flickering.
Megaltariak - MSI Afterburner will normally allow you to set multiple fan/overclock profiles, but for some reason that function isn't available to me; still working on sorting that out. -
MSI Afterburner may be the program that alters the fan profiles, but I cannot remember. Unfortunately, I remember there being talk that the 460m fan profiles are locked. I'm not positive though; hopefully someone can provide a fix.
-
Powermizer is exactly what you describe (3 power states)
I also think that the fan profiles are hardcoded in the BIOS; maybe we could change them by modding it, but I'm not experienced with this and it is IMO too risky (unless there is someone very experienced here). -
Ok, this morning everything was normal - the screensaver was still moving, no artifacts. I tapped the touchpad, and immediately got driver crashes and artifacts everywhere. Time for an RMA, I think. I still don't get why this GPU can take 5 hours of continuous intensive gaming without any problem, and crash on the blue, immobile lock screen of Windows 7. (I never got these crashes suddenly while I was in front of the laptop, only after a long idle period.)
-
GPU in state where results are not reproducible! - NVIDIA Forums
failing under heavy double precision load - NVIDIA Forums
The Nvidia 400/500 series lockup club! - NVIDIA Forums
the first being the most closely related.
at some random point it loses its ability to track its memory. this results in EVERYTHING - from 2d desktop graphics and window borders having color corruption, to flash, 3d, video overlays, and more - being corrupt to the point that tdr events occur.
gpu vram stress tests will also fail 90-100% of tested locations when in this state, but pass hours upon hours of tests with no errors at all any other time.
it will persist through a restart. it will only abate on a power off, wait, power on. ie a soft reboot is not sufficient to recover the gpu.
and no, it's not a problem with just the 280 series: 275.33, 260.xx, the windows default vga driver (with correspondingly bad resolution), linux drivers galore, the 29x.xx series, most recently tested with the 296.10 release.
for many, this machine can run f@h gpu cores with the default drivers, which are not 280 series. a machine like mine or his cannot, without going unstable anywhere from 10 minutes to 12 hours in.
extensive testing has demonstrated the cause is one of three things: 1) a soft hardware fault, with intermittent occurrence, regardless of power state. 2) a problem with switching power states on the gpu/gpu vram, resulting in an undefined state until a full power cycle. 3) a dying card.
i continue to feed information along in the hopes they can work out how causes 1 and 2 make it out of the factory, bypassing qa chip tests.
as an aside, there is a tremendous amount of "maybe this fixes it" going about, if you take the time to search it out (like any person experiencing this inevitably does).
some cases get solved by disabling the nvidia hd audio and all hdmi functions.
for some, drivers influence the results to a measurable degree.
some cases are highly specific to flash acceleration, and avoided by disabling it.
some cases boil down to trying new and ever more inventive/invasive ways to keep the clocks+voltage from fluxing.
in almost all cases, eventually the card (if a desktop 400/500 series experiencing this) or the entire system gets rma'ed, as the problem never fully abates, and often worsens with wear and time.
as i noted when this thread was going initially, this is a gpu that can start displaying color corruption in linux framebuffer mode, windows basic vga driver mode, and the linux quick boot environment (expressgate) that comes with this series. it's even gone funky color mode while running a livecd copy of memtest86 in TEXT MODE, and/or while sitting in the bios menu.
for those experiencing this specific set of symptoms, the only constant is that the gpu will become unstable at completely random times. the ugly, nasty part is that no one has yet found a way to FORCE the corruption to occur through a specific action set or command series.
if it was bad drivers, then a situation with hardware-only control should be safe, but isn't.
if it was simply power state switching, then programmatically forcing rapid power state switching should increase error frequency or force the instability.
if it was heat, high heat should fault the chip. if it was low temp... etc etc.
but the reproducibility isn't there. when sending a machine in, all you can do is say: disable power saving shutdowns and let it sit idle overnight. check the next morning and voila, unstable gpu state until a hard power cycle.
there are even some fellows who modded their gpu vbios for the 460m on this notebook model to force the high power state settings for all power states used by the chip. same problem persisted.
can see it occur, but can't find a way to force it either, and it passes tests, except when it doesn't.
i'm almost through trying to find a way to hard-force the error programmatically. some people are willing to pay for a software test that induces the failure on command to vet units with, but it's proving too deep a flaw. -
Just a question for anyone who has this problem: have you ever seen the GPU actually put itself into this bad state while you were actually using the computer? By using it, I mean user interaction, not automated GPU load.
Even in intensive usage (more than 8 hours sometimes), I never experienced the problem once. However, I've seen these artifacts and crashes more than a dozen times when opening the laptop after a night of idle (at the beginning only at low freqs, but now even at higher freqs and with some load on it, see the last few pages of this thread). Like steelblueskies said, it gets worse with wear and time.
For the RMA, I called them, and I need to take photos of the problem (I should have taken them before).
I hope that the problem will not choose this moment to disappear. -
to answer that: yes, but that is even harder to backtrace. it's most common with demanding applications that cycle high load > no/low load (loading screens) > high load.
tried simulating the effect with timed repetitive cycles of some benchmarks, but the times it occurs remain randomly distributed, ie no causal relationship other than load cycle variance being the trigger IF it is going to occur at a given moment.
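the harness was nothing fancy - a sketch of the idea in python (the stress tool path and its flags are stand-ins/assumptions; any benchmark that loads the gpu and exits cleanly when killed will do):

    # load-cycling harness: force repeated 3d-power-state <-> idle transitions
    import subprocess
    import time

    # assumption: furmark (or any gpu stress tool) installed here with these flags
    STRESS_CMD = [r"C:\tools\FurMark\FurMark.exe", "/nogui", "/width=640", "/height=480"]

    CYCLES = 100        # number of load > idle transitions to force
    LOAD_SECONDS = 60   # hold the high power state
    IDLE_SECONDS = 60   # long enough for powermizer to drop back to PL0

    for cycle in range(CYCLES):
        proc = subprocess.Popen(STRESS_CMD)  # clocks ramp to the 3d power state
        time.sleep(LOAD_SECONDS)
        proc.terminate()                     # kill the load so clocks fall back to idle
        proc.wait()
        time.sleep(IDLE_SECONDS)
        print("cycle %d/%d done - check the screen for corruption" % (cycle + 1, CYCLES))

no joy though: corruption onset stayed random no matter how many transitions the loop forced.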
in recent memory i've had it occur during a constant load situation once or twice, but this is much, much rarer outside of a cuda project (and again, no heat difference to account for it, and the time it chooses to blow can be anywhere from 5 minutes to 15 hours after starting such an app continuously).
i really wish there were some specific conditions to cause it to occur on demand, but no dice.
already know the main symptom is vram corruption until hard power cycle, with the net result of all hardware acceleration failing, and all graphics/color draws exhibiting problems.
just be glad the g53sw uses a modular gpu daughterboard.
one other thought on analysis. there's an ic on the mainboard, upward-facing side, positioned the same as the gpu daughterboard on the downward-facing side. it has no thermal pad as far as i can tell, just a shiny finish on the chip package, which sits against the black plastic coating the chassis layer you see if you remove the keyboard.
looking for more information on that ic. after looking at it, i believe the chip may be part of the irregularity of the fault, due to extremely bad thermal handling and its position directly above the gpu on the mainboard. ie both it and the gpu get pressed hard, and the ic above the gpu, which can't vent heat (due to the black plastic and no thermal transfer material into the aluminum sheeting/chassis), may be responsible, as it would heat the mainboard directly under the gpu excessively, possibly faulting related power feed components.
*edit: incomplete point the first time*: the idea of this point was: what could be more variable than pressure contact affecting a nearby large ic's ability to vent heat into its nearest piece of chassis? we all know the keyboard flexes to one degree or another. so does the frame under it. if the ic is upward-facing with no thermal transfer pad, varying the top surface pressure can vary its ability to conduct that heat through the black plastic layer into the intervening aluminum. that could explain some of the randomness observed even when the conditions, operationally and thermally (cpu/gpu temp monitors), seem identical. ie it could be a contributing factor. still more testing needed. *end edit*
it would also explain the hard power-off needed to clear the instability: ic chip packages going unstable but not blowing, getting into a latched state somewhere until a hard power-off and enough time for the erroneous charge to bleed off.
meh. like i've said, i've been looking at the problem very very very hard.