I do agree, though. I'm still trying to get a handle on all the variables. At the time I was thinking along the lines of a Furmark-type load running.
So yeah, I get your point, and it is a valid concern. Maybe I'm just worried that a 130-watt PSU can't be done for some reason (not that Dell tells us much) and hoping that those 90-watt adapters were flawed and somehow mine will work, lol.
Wishful thinking. Just like anyone else, I just want one that works. This laptop would be almost the Holy Grail if it worked as advertised.
-
Intel manages to build immensely complicated processors that can run tight code without burning themselves out. The RAM in my laptop manages to get through memory stress tests without burning out due to a "read/write pattern virus".
I think Prime95 + Furmark is a pretty insane test, and using it as a representation of what you can expect the laptop to manage for 12 hours at a time is a little extreme. Having said that, Dell has been extremely obtuse in acknowledging any issue at all, and it's only through such tests and the availability of new tools like ThrottleStop that this issue is even provable.
My 820QM + RGBLED throttles with just Prime95 running 8 threads of blended tests, and that is a problem, since my CPU is in fact limited by throttling even when I'm not using the machine for "multimedia" or whatever climb-down term Dell is now using to tell us it was never meant for gaming.
I want to be able to compress video, compile code and run virtual machines, and that sure as hell *will* cause throttling. What happens when video compression programs start supporting the GPU? -
There are situations that ATI could not predict, and Furmark is one of them. It pushes every pipeline of the GPU at once, which was never intended, and uses way more power than expected. That's why they call it a power virus. The term does suit the program.
Just for the record, my RGB+720 also throttles with just Prime95 if the brightness is turned up very far, which is of course unacceptable.
Still we want the same thing. -
I'm still getting this creepy feeling that Dell has cut far too many corners here to lower costs. Throttling is one thing, but my 1645 took 6 seconds from when I pressed the power button before it even made a noise and started booting up, and my headphone jack suddenly stopped working and the internal speakers kicked in every now and then. In addition, the RGBLED had some serious light leakage covering about 20% of the screen on both sides. Is it possible that this is all a result of Dell using cheap, underperforming components that simply can't handle the power required to run the i7s and the 4670s at full power? And is there even a way to prove it, if that really is the case?
-
A test with a 210W brick has already been done. Search this thread, it's there.
~Ibrahim~ -
I think testing should stop. Everything should just stop. I am so angry at Dell.
Here's sort of an update... The 1645 went back, and I ordered the Lenovo W510. It's got pretty much the same specs except for one interesting difference: a 135W power adapter. I have always owned Dell, and I'm actually a bit upset that I ended up returning the 1645, as it was a system that I REALLY wanted to like. On paper it had everything I ever wanted. However, at the end of the day, sticking with Dell in this situation is just too obviously not the right thing to do. For those of you sticking around, I wish you guys the best of luck in getting Dell to fix these problems. I'll definitely consider the vendor in future purchases, but as of now, they have lost a customer. I'll be hanging around the Lenovo sub-forum now... Cya guys!
L. -
Good for you.
(no sarcasm)
~Ibrahim~ -
Thinking of grabbing this laptop. http://www.amazon.ca/gp/product/B002FU6IQI/ref=ox_ya_oh_product
$1399 + tax - $300 MIR seems like a pretty good deal, and at half the cost of the XPS 1645. Yes, it has lower specs, but it will definitely outperform my 1645 at this point. Hell, even my 5-year-old PC can play some of the games my XPS can't. I am definitely never buying Dell again and will be telling everyone I know to avoid them at all costs. -
-
I don't know about you guys, but the DVD drive in my first 1645 didn't work at all. The one in the second would only work half the time; I could literally put in the same disc, and sometimes it would read, other times it would hang to oblivion. The WLED screen I had was perfect, but the second 1645 I received looked awful.
It was a bit frustrating.
Asus had an i7 version of that laptop, but it looks like it has since disappeared; it's been sold out for a long time. It was $1500 with a Razer mouse and a backpack. -
"Hey guys, how much power do you think this thing needs? "
"Oh man, its *impossible* to predict, there's just no way to imagine what the maximum might be..."
Gfx cards are invariably run to their limits in the real world. I'm not a gamer, but if I were, I might be more worried about this than anything else. What if these cards end up being like the NVIDIA GeForce 8600GT that was burning out in MacBook Pros recently, and which could not be fixed even with a replacement because the underlying problem was a design flaw?
-
And why are you worried about it? How long can you run your car engine at max power before it blows up? Not very long at all; in fact, it would be hard for your car engine to reach and sustain max power, and it's not possible in any real-world use. Do you worry about that? No. Because it is the nature of the beast.
GPUs and CPUs are very different: each core of a CPU can only run 2 threads, while a core of a GPU can run into the thousands. They are two vastly different designs and should not be compared, any more than a car engine should be compared to a GPU.
Every GPU released operates this way, it is not just the GPU in this laptop, it's the nature of the beast and why Furmark should be avoided. -
Mitchell2.24v Notebook Evangelist
The design process for any processor is likely quite complex, and simulations are probably SOP. Therefore I cannot believe ATI or NVidia would be unable to predict the results of FurMark running. They might even be able to run FurMark on a virtual or simulated GPU. How else are they going to design the thing? -
That car example is horrible, lol.
But correct me if I'm wrong: "power virus" is an ATI-coined term; I'm not sure Nvidia even cares. And it's their own fault, as they didn't provide any protection on their GPUs like Intel does on its CPUs. They do now with the 5000 series, which has hardware protection, but the 4000 series didn't, so they coded their drivers to block Furmark.
The only way you're going to damage a car engine is if the numerous failsafes programmed into cars these days fail. They have rev limiters; you can stand on the pedal all day and it won't blow. If they start overheating, they'll dump more fuel, pull timing, etc., or shut down. The 4000 series didn't have those failsafes, and very prolonged use of Furmark damaged them.
It is not Furmark's fault; its authors have no evil intention of killing graphics cards. It's like our throttling issue, with underrated parts limiting performance, except those cards don't have throttling, so they die after a while. -
Possible damage to my gfx card was just about the last thing I was thinking about when I downloaded a benchmarking program. When I first heard the term "virus" in connection with Furmark, I thought that meant it contained a trojan!
On a somewhat off topic note, I think this thread isn't getting the attention that it deserves:
http://forum.notebookreview.com/showthread.php?t=451601
Buddy bricked his 1645 with a BIOS update and found the right combination of steps to bring it back to life. The package he built should be downloaded by all, "just in case". -
They may have known this was possible; perhaps that's why they implemented throttling. But again, just because you can run a torture test that can break something doesn't mean it's not up to par. The problem is the way Furmark instantly pushes every part of the GPU. If you don't believe it, run Furmark: a lot of people do and never have problems, while some have burnt out their GPUs. If you need a GPU that won't throttle and won't burn up, you have no options.
You have to take a product for what it is. Some things you can expect to run at 100% load 100% of the time, like a CPU; GPUs are different, and if you tried that you would burn one out fairly fast. It's a question of whether it suits its purpose or not. These GPUs can last for many years, but if you run Furmark you run the risk of burning one out, and you will almost certainly shorten its lifespan.
Just for the record, I have heard of programs that can burn up CPUs or cause them to throttle.
If you really want to know why, ask ATI. But ATI calls it a power virus for a reason. People have burnt out GPUs from both ATI and NVidia running it. It is a pointless load that can only be produced by that test. -
-
As to the other point, Nvidia has also stated that the use of Furmark is not recommended, and Nvidia GPUs have died because of it. -
Furmark is software; albeit an unrealistic piece of software, it is still only that. If a piece of hardware fails from running some software, it obviously wasn't ready to leave the factory. I also think this whole argument over the validity/safety of Furmark is irrelevant.
-
Look, I don't know why you guys are being so anal about this. No GPU can run Furmark safely; deal with it. -
-
-
2. Overclocking software directly manipulates/changes the hardware, which is different from USING the hardware. -
If you don't like it, or want to argue until the cows come home why it shouldn't be, well by all means.
But it is that way whether you, me, or anyone else likes it. -
@SlyNine
I don't know why you are arguing against Furmark; I am using it in addition to Prime95 to test a Dv6t with an i7. Under both programs, the system works fine, and throttling happens only when the temperature of the system reaches 100C after 15 or 20 minutes, on battery or AC. This is the way throttling should work, not throttling because Dell would not provide an adequate charger. -
"Believe it or not electronics aren't supposed to have every one of their circuits maxed all at the same time. That's not a design "flaw"."
Cannot remember where I read this, but it is true. -
I don't think many people here are claiming Furmark is anywhere near realistic use. In fact, I think you have misread this.
Furmark has been essential in the early investigation, and it still is for validating earlier findings. What I mean is, it is consistent enough to really examine the throttling schemes implemented by Dell; games can't do that. We can force multipliers via ThrottleStop and see the tipping point where the BIOS starts clock modulation. We can force a multitude of multipliers/modulation levels, observe the time it takes to return to normal operation, the frequency at which the BIOS updates, etc. Invaluable.
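To make the methodology concrete, here is a minimal sketch of the analysis step described above: sample the CPU multiplier at a fixed interval under a constant load and flag the spans where the BIOS has forced it below nominal, measuring how long each episode lasts. The function name and the trace data are made up for illustration; this is not ThrottleStop's actual code.

```python
def find_throttle_episodes(samples, nominal, interval_s=1.0):
    """Return (start_index, duration_seconds) for each run of samples
    where the multiplier drops below the nominal value."""
    episodes = []
    start = None
    for i, m in enumerate(samples):
        if m < nominal and start is None:
            start = i                        # throttling episode begins
        elif m >= nominal and start is not None:
            episodes.append((start, (i - start) * interval_s))
            start = None                     # back to full speed
    if start is not None:                    # still throttled at end of trace
        episodes.append((start, (len(samples) - start) * interval_s))
    return episodes

# Fabricated trace: nominal multiplier 12, two forced-modulation dips.
trace = [12, 12, 9, 9, 9, 12, 12, 7, 7, 12]
print(find_throttle_episodes(trace, nominal=12))
# → [(2, 3.0), (7, 2.0)]
```

The point of a steady synthetic load is exactly this kind of repeatability: the same trace shape comes back run after run, so the tipping point and recovery time can actually be measured.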
Now, if it is risky and others decide to use Furmark, that is their business/risk to take. -
Actually, it is standard practice for computer manufacturers: the more burn-in testing the system passes, the more reliable it is. It exposes inadequate thermal paste application, an inadequate heat dissipation system, and similar problems.
In addition, when a chipset is designed for a 100C max operating temperature, it means it can withstand more, say 110-120C, but for safety the limit is set to a smaller value to guarantee stability. -
I repeat, don't use Furmark.
Don't use Furmark.
Do NOT use Furmark.
Don't use Furmark.
Do you see how irritating that gets after a few times? How many more pages of that are you going to post Mr. Plant?
Now, tell us a repeatable, acceptable, unbiased, effective benchmark that everyone can use. We are all listening.
As for cars, you don't know what you are talking about. Any car engine can run all day, for weeks at a time, under full load. If it overheats, that is a design failure, as the cooling system is not up to the task. Similarly, I can bounce my motorcycle engine off the rev limiter while doing a burnout until the tire explodes, and nothing bad happens to the engine. The rev limiter is set at 10,500 RPM. It can turn 10,500 RPM forever; it will eventually wear out, like any other mechanical system, but not because of the load. You see, the engineers know that it will run forever at 10,500 RPM. They know that at 11,500 RPM the valves will float and bad things will happen. They use this information to set the limiter at 10,500. This is like the GHz rating of any Intel chip: Intel knows its safe and repeatable limit, adds a safety cushion, and sets the speed accordingly.
If my vehicle's motor starts to heat up, the thermostat opens and lets water circulate through the radiator. If the overheating is severe, the computer will begin disabling the fuel to two of the eight cylinders, then a different two, etc., to let them cool. This is EXACTLY like Intel having a built-in, hardware-based, temperature-sensing throttling system. If you overheat the CPU, it slows itself down. Notice, the limit is heat, not some arbitrary wattage.
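The temperature-triggered scheme just described can be sketched in a few lines. The trip point, resume point, and multiplier values below are illustrative assumptions, not Intel's actual thresholds; the hysteresis band (throttle above one temperature, resume below a lower one) is the key idea.

```python
TRIP_C = 100     # assumed thermal trip point (illustrative)
RESUME_C = 95    # assumed hysteresis: resume full speed below this

def next_multiplier(temp_c, current, full=12, throttled=6):
    """Pick the next CPU multiplier based only on die temperature."""
    if temp_c >= TRIP_C:
        return throttled          # too hot: slow down
    if temp_c <= RESUME_C:
        return full               # cool again: full speed
    return current                # in the hysteresis band: hold

m = 12
for t in (80, 101, 97, 94):
    m = next_multiplier(t, m)
    print(t, m)   # 80→12, 101→6, 97→6 (held), 94→12
```

Note that wattage never appears: the only input is temperature, which is exactly the complaint being made about a throttling scheme keyed to the power adapter instead.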
You keep telling people to set their GPU to its slowest settings to run games? The whole point of a "performance" setting is to get performance. Why should anyone have to disable that? Do you keep a block of wood under your accelerator pedal so you do not accidentally use all of the performance of your car? When you do something that loads the GPU, you want its performance set to high, so that it does that task better. Do you wear out-of-focus glasses so you don't accidentally use all of your vision?
Cars also have some other fail-safe mechanisms built in, just like laptops. If you put bad gasoline in them, the knock sensor alerts the computer which retards the timing to protect the engine.
In this scenario, you are using fuel that does not meet minimum standards ... for a computer, that might mean a power supply that does not put out enough wattage. You don't retard the timing of EVERY vehicle so that they can all run on sub-standard gasoline, do you? No, they would pollute more and make less power. Along those lines, you don't handicap EVERY laptop to the level of an inadequate power supply. At worst, you offer different performance based on the detected wattage available. In this case, that would mean different limits based on the detected supply ... just like different timing based on fuel quality.
Dell power supplies all send an identification signal down the center conductor which the BIOS uses to identify them. So ... the BIOS knows what wattage is available. If it can't detect a signal, it won't charge the battery and it runs the system as if it has only 65 watts available. I mention this to counter another invalid point I expect you to make.
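The behavior described in the paragraph above amounts to a lookup from detected adapter wattage to a power policy. Here is a hedged sketch of that idea; the tiers, flags, and function names are assumptions for illustration, not Dell's actual firmware logic. The only behavior taken from the post itself is the unidentified-adapter fallback: no battery charging, run as if only 65 W were available.

```python
# Hypothetical policy table keyed by detected adapter wattage.
POLICIES = {
    130: {"charge_battery": True,  "full_performance": True},
    90:  {"charge_battery": True,  "full_performance": False},
    65:  {"charge_battery": False, "full_performance": False},  # fallback tier
}

def pick_policy(adapter_watts):
    """adapter_watts is None when the ID signal can't be read."""
    if adapter_watts is None:
        return POLICIES[65]       # no ID signal: don't charge, assume 65 W
    return POLICIES.get(adapter_watts, POLICIES[65])

print(pick_policy(130))   # identified 130 W brick
print(pick_policy(None))  # unidentified adapter: conservative fallback
```

The argument in the thread is precisely that, since the BIOS already knows the wattage, it could scale limits per adapter instead of capping every configuration to the weakest supply.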
Back to my point; provide an alternative testing method that meets normal scientific standards. That means a fair test that produces repeatable results.
Just in case that needs repeating, stop complaining about the test and provide a better one, or STFU about it.
BS. That is not true at all. All electronic and mechanical systems are designed with a certain amount of safety margin built in. EVERY piece is better than the absolute minimum required. They are designed to be used. I should never use my laptop, so it will last forever?
Where do you get this crap? You clearly do not design anything mechanical or electrical. As such, I suggest you stop telling us what you THINK engineers do and how you THINK things work. -
How about: "Oh, but it should."
"Oh, but it should."
"Oh, but it should."
"Oh, but it should."
"Oh, but it should."
See how pointless that is?
If you believe that, leave your amp on full blast, turn your car on and hold the throttle down all the way, turn all your stereos up. Turn everything you have as far as it'll go, make something push it as hard as you can, and see how long your stuff lasts. -
Rather than address anything that he said, you deflected and only made your position seem even weaker. Priceless.
-
Unreasnbl should change his/her name to reasnbl, lol. That made total sense, and I really don't see why this is continuing.
-
Maybe you will come down to reality and see that things are built to serve a purpose. Going beyond that purpose just to see whether the safety mechanism is adequate for any situation is a waste of time and resources. AMD has said FurMark CAN HARM YOUR GPU. Why are you ignoring the makers of your GPU?
My car puts out 303 HP; if I had it pushing 303 HP for about 12 minutes, it would die. If you did this to your car, IT WOULD DIE. Ask some real engineers, people. They will tell you: IT IS NOT DESIGNED TO DO THAT. -
Actually, like someone above said, and I know this because I spent some time as a computer engineering major: the advertised limit is NOT the limit. There is a safety cushion to ensure that the limit is never reached. Therefore stress testing (excluding overclocking) should be safe as long as it's done within reason (I wouldn't run Furmark for 48 hours or something ridiculous like that).
-
-
Maybe this situation is too unique to be compared. But regardless, AMD has suggested Furmark can hurt your 4670s.
So why do people insist that it shouldn't or can't? I can at least understand the argument for why AMD should release something that cannot harm itself (well, sort of, until I put it that way).
But when people start saying they can push their cars to the max, turn their amplifiers up as far as they will go, push anything and everything to the max, and it shouldn't harm it... Dude, COME ON. If you are a computer engineer, you should know how foolish that is. -
This is why I am saying we should not care about, or test with, Furmark. We have pre-5xxx cards.
The problem still exists; why exaggerate it with synthetic benchmarks that are potentially harmful to our computers? -
-
-
Looks like the thread has deteriorated into bickering, and since there is an updated thread here, I am closing this one.
Also, remember that NBR has an ignore-user feature located in the UserCP... some of you guys may want to look into it...
S-XPS 1645 AC Power Throttle Issue Investigation
Discussion in 'Dell XPS and Studio XPS' started by Zlog, Nov 26, 2009.