Oh, sorreeee Mr. Cool.
He said he got his ES QX9100, yes, QX9100, to 3.8 GHz. Your QX9300 had better overclock higher, considering the QX9300 uses a later stepping that is way better at overclocking. And who cares if you have the highest-clocked QX9300? Do you just like to toot your own horn? Why do you have to come in here being so arrogant and disrespectful?
-
This thread is disrespectful to a 920... so you're even.
-
Well, what I meant was: what's the highest clock when you have 8 threads active, with and without active cooling?
For both the 920/940 I see higher GHz, but with hyperthreading disabled.
I wanted to find out what the highest stable GHz was with hyperthreading enabled. -
Really? Who is running with hyperthreading off? You can't directly shut it off in the BIOS.
-
You can with certain Clevo versions. I have the option, but I've left it on for now. There are a couple of dudes who run with it off at higher speeds than I'm comfortable with.
-
Ahhh, OK. That makes sense then.
Because you can use the msconfig advanced options and set the processor count to 4, and it will turn off hyperthreading. Extreme CPUs only, it would seem. I tried it with my 330/430/620/720 and it did not turn it off.
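A quick way to sanity-check what msconfig actually did: compare logical vs physical core counts. A minimal sketch, assuming the third-party psutil package (not something anyone in the thread used):

```python
# With HT truly off, logical and physical core counts should match.
import psutil

logical = psutil.cpu_count(logical=True)    # what the OS schedules on
physical = psutil.cpu_count(logical=False)  # real cores

print(f"logical: {logical}, physical: {physical}")
print("hyperthreading looks", "OFF" if logical == physical else "ON")
```
-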
I think I can turn off my cores/threads too with the msconfig startup options.
Anyways, as johnny said, most guys in the Clevo/Sager forums who have the highest clocks have HT turned off.
I was wondering what the highest stable clock was on all 8 threads. -
cookinwitdiesel Retired Bencher
I don't normally "toot my own horn"; this thread was just begging for an intervention. The OP's testing methodology and conclusion were so fundamentally flawed that something had to be done. And as John said, he is severely disrespecting the 920XM.
I really hate when people try to say an overclocked part is better than a stock but more expensive part, acting like the more expensive part cannot be comparably, if not further, overclocked. If he wants to feel happy that he got a decent-performing chip, that is fine, but do not go around bashing people with better hardware to make yourself feel good.
And it's Cook, not Cool.
-
That HT thing is a give-or-take situation depending on the workload.
For CPU-Z you only need to run 1 core, whereas in wPrime 155 you need to run as many threads as possible to achieve faster times... I run between 3.6 and 3.7 for daily use... but then I'm not running my chip at 100 percent on all 4 cores times two either. (I have desktop machines for that.) -
Hrm... I wonder if the extra circuitry actually powers down, or if it's just sitting idle when disabled through msconfig?
-
cookinwitdiesel Retired Bencher
HT does not use extra circuitry... it just fills out the same pipelines that are already there, maximizing core utilization.
-
My mistake, I thought there were extra pipes between the ALU and FPU to enable it.
-
cookinwitdiesel Retired Bencher
As far as I know (and I could be wrong), HyperThreading just fills any gaps in the pipelines, pretty much forcing the CPU to 100% utilization whenever there is a workload. That is why HT causes more heat: the CPU has less/no downtime between execution of instructions.
But that is based on the knowledge I learned about general processors in my undergrad Computer Design class... Intel is most likely a little more advanced than what I was learning haha
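To illustrate that "fill the gaps" idea, here is a toy sketch in Python; purely illustrative, nothing like how real hardware schedules micro-ops:

```python
import random

random.seed(1)
SLOTS = 20  # issue slots in some window

# Thread A stalls (cache miss, dependency) on ~30% of slots.
thread_a = ["A" if random.random() > 0.3 else None for _ in range(SLOTS)]

# With HT, thread B issues in the slots A left empty.
with_ht = [s if s is not None else "B" for s in thread_a]

print("without HT:", "".join(s if s else "." for s in thread_a))
print("with HT:   ", "".join(with_ht))
print(f"slot usage: {sum(s is not None for s in thread_a) / SLOTS:.0%} -> 100%")
```
-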
Really makes no difference: the OS isn't using it, nor does it see it.
But Performance Test (PassMark 7) acts like it won't run if you don't enable the missing cores it's looking for. Although I did get it to run once, that was it. -
The original 2002 hyperthreading implementation in the P4 was listed as requiring 5% more die area for a 15-30% performance benefit. And while I believe your understanding of Hyperthreading is (essentially) correct, you also have to remember that to do that, you'll need some extra circuitry; specifically, registers to store the execution thread data.
-
cookinwitdiesel Retired Bencher
That is true; as I mentioned, I have a very elementary understanding haha (at least on the silicon level) -
Bottom line... who cares... lol
A QX9100 does not outrun a 920XM.
And a QX9300 beats the 920XM by like 20 MHz in max speed: the QX9300 at 4.33 GHz and the 920XM at like 4.313 GHz. -
This helped me understand HT:
http://www.youtube.com/v/7c3CfJe_6kQ -
I really hope it didn't! That video alone might be at least partially responsible for why so many people think their dual core is like a quad core. That, and the task manager.
All hyperthreading does is allow more efficient utilization of the execution units that are already there. When your processor shows a load of 100%, that doesn't mean every execution unit is fully used; it means all clock cycles are being scheduled. In some cases HT can even show a decrease in performance. At best, Intel says it will increase performance by 30%. There is a correlation between the performance increase from hyperthreading and how parallelizable the software is.
To go in depth, look here:
Performance Insights to Intel® Hyper-Threading Technology - Intel® Software Network
How to Determine the Effectiveness of Hyper-Threading Technology with an Application - Intel® Software Network
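For anyone who wants to measure that parallelizability themselves, a rough sketch: time the same CPU-bound job at increasing worker counts and watch where the speedup flattens. The worker counts below assume a 4-core/8-thread chip:

```python
import time
from multiprocessing import Pool

def burn(n: int) -> int:
    total = 0
    for i in range(n):  # pure CPU-bound busy work
        total += i * i
    return total

if __name__ == "__main__":
    for workers in (1, 2, 4, 8):
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(burn, [2_000_000] * 8)
        print(f"{workers} workers: {time.perf_counter() - start:.2f}s")
```
-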
cookinwitdiesel Retired Bencher
Have to agree with trottel there... they should have shown something like all the blues going in and only 1 in 3 yellows, since the "2nd" thread on each core is really just using the leftover space in the pipeline and not a full independent datapath.
-
It isn't quite like that either. They should be leaving in the same ratio they are going in, since neither is prioritized over the other; it's just that the total number of balls going through might be 30% higher. If one were just using the leftovers of the other, it would lead to all sorts of problems.
I remember the original Intel demonstration when they first showed the world their hyperthreaded P4. It was even worse than the video tHE j0KER posted. They had two identical computers with P4 processors side by side, one with HT, one without. They were ostensibly running the same software and had tons of windows and applications running at the same time. The computer without HT was barely crawling and moving slower than molasses. The computer with HT was zipping through everything like you wouldn't believe. There were lots of oohs and aahs in the audience followed by lots of clapping. Funny how one computer was running 10x faster than the other even though Intel themselves say that HT could give them up to a 30% performance boost. -
cookinwitdiesel Retired Bencher
And the P4 hyperthreading was AWFUL compared to the Nehalem hyperthreading...
Whichever thread is primarily on the "real" core and not the hyperthreaded one will perform better, though; this has been shown multiple times. Hyperthreading is only worth using if all of the physical cores are already occupied. -
Well, that's an OS processor-scheduling limit, too. You need an OS that's "aware" of which is a logical processor and which is a "real" processor to have threads scheduled appropriately... which Windows 7 is supposed to be.
-
It's the same. Only the software is different.
Do you have a link to one of these tests? I'd like to take a look.
And there is no real and fake core. -
There is a physical one and a virtual one. The physical one has access to all the power of the pipeline, while the other must try to fill the gaps left by the first one.
Thus the physical one is always way superior to the virtual one.
And on the P4, well, NetBurst as a whole was awful. -
niffcreature ex computer dyke
The information (at the beginning of the thread) is not surprising to me. It's along the same lines as my overclocked 65nm Quadro FX 3700M outrunning a 55nm GTX 280M at stock.
Because of the way the chips are manufactured and branded, this will always be true. And I'd bet there are QX9100s that will beat the overclocking limit of a few 920XMs.
What would interest me more is an entire architecture that overclocks so badly that the older architecture beats it when both are pushed to the limit.
I'd also bet there are GTX 280Ms that won't run at 600/1500 MHz, which makes me happy.
-
cookinwitdiesel Retired Bencher
I would bet that EVERY GTX 280M can run at 600/1500... most can do 650/1625... I was able to go up to 680/1700 with mine when I had them.
And for the HT tests, I will do some tonight for you on my desktop. I will do 2 real cores vs 1 real + 1 HT and show you the differences. I will also play with affinity to run a single-threaded app (SuperPi) on both a real and an HT core.
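Pinning affinity like that can also be scripted; a minimal sketch, assuming the third-party psutil package. Which logical IDs map to a physical core vs its HT sibling depends on the platform's enumeration, so the 0/1 pairing here is an assumption:

```python
import psutil

proc = psutil.Process()

proc.cpu_affinity([0])  # assumed: a "real" core
# ... run SuperPi or another single-threaded benchmark here ...

proc.cpu_affinity([1])  # assumed: its HT sibling
# ... run it again and compare the times ...
```
-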
No way, Jose. Where do you get your info from?
A processor with hyperthreading duplicates the transistors that hold the architectural state, which is what shows up as two processors to the OS, each with its own interrupt controller. These two logical cores are identical and are equally tied into, and share, all other processor resources. If both logical cores are requesting the same resources, they are going to share them. One is not a superior core that can use as many execution resources as it wants at all times, and the other is not an inferior core that only uses execution units the other isn't using. They intelligently share the load, so if you run the same thing on both, they will take just as long to do the work, not one finishing way faster than the other.
If what you say were true, do you have any idea how much more difficult it would be to effectively manage and balance workloads than it already is? -
This actually happened when the first P4 came out. The 180nm Coppermine PIII could outperform the 180nm Willamette P4 when overclocked. When the 130nm processors came out, Intel had the Tualatin PIII and the Northwood P4; the Tualatin was easily the equal of the Northwood, if not better. Intel axed the Tualatin quickly and further developed the Northwood, which got better with later steppings. While Intel was proclaiming that NetBurst could scale well to 10 GHz, they had their design team in Israel develop the Tualatin into the Core architecture, the "grandfather" of Intel's current processors.
-
I might as well follow this up by posting some overclocking results for my QX9300.
At 3.33GHz (stable):
At 3.49GHz (not so stable):
At 3.63GHz (insane):
-
The processors actually duplicate the physical transistors through the CPU's software/firmware?
Link please. -
So theoretically, if a C2Q and an i7 XM could hit the same max GHz and the CPUs were pushed to 100%, they would perform the same? Since the i7 would not have enough headroom for hyperthreading to help if it were already at 100%?
-
Damn, 5482741!
Those are some great numbers there! -
No, I mean there are two complete sets of areas for holding the architectural state and interrupt controller in each processor core.
No. The architecture of the Core 2 and Nehalem cannot be compared 1:1 like that.
Also, 100% usage does not really mean 100% usage. The usage percentage you see in Task Manager does not take into account the actual utilization of the execution units. That is where hyperthreading comes in: to take advantage, or at least try to, of any processing power that would otherwise go unused. -
Wikipedia's article on hyper-threading is a decent primer. What it comes down to, really, is that if you're going to execute 2 threads simultaneously on the same core, you need a way to save the state of one of the threads while you're running the other one, which is why you need to have enough storage registers for each thread. This allows the hyperthreading to be relatively "transparent" to the OS, which can then believe that it can simply schedule 2 threads at the same time, as the CPU will be handling the split-up execution of both threads. I say "relatively" because it helps if the OS is "smart" enough to not assign 2 threads to 2 logical processors that are attached to the same core, if there's a logical processor attached to a currently unused core that's free.
Well, the i7 also has a different micro-architecture that makes it better performing, which makes the comparison difficult. It's a little easier to compare by going back to the old P4 days, as mentioned in the later part of the Wikipedia link above. That one shows that with well-threaded code, a P4 with hyperthreading would perform 15%-30% better than the same P4 without hyperthreading, with only a 5% increase in die area for the required extra storage and resources. This is, naturally, a best-case scenario. Let's assume the performance numbers remain about the same for Intel's implementation of hyper-threading in the i-cores (completely unsupported, but let's use this for the sake of argument).
This would mean that in the best case, an i-core like, say, the i7-820QM would run 15%-30% faster through a well-threaded program than a (theoretical) i7-820QM that had no hyper-threading (or possibly an i7-820QM with hyper-threading turned off). It's still not as good as an (theoretical) actual 8-core processor (without hyper-threading) running at i7-820QM speeds, (which would be more like 75%-90% faster, although at the cost of a probably close to double die size and all the heat and problems that entails), but it's an easier design upgrade than an all-out full core.
Remember, too, that except for the simplest continuous threads, there will usually be "breaks" in the thread where the thread is stalled for various reasons, be it the result of an I/O read, waiting on the result of another thread, user input, etc. It's these times when the processor is thus not being utilized that the other thread is run.
This, by the way, also explains a lot of the practical performance differences between the Arrandales and the Clarksfields; for most programs that are not very well multi-threaded (which is not quite the same as the number of cores they use), the Arrandales will obviously win out with their faster speeds. But when you stress them, you find out that hyper-threading is not as good as "real", even if slower, cores. Here are some real-world numbers from an 8740w owner who tested an i7-620M and an i7-720QM in the same machine. Note how the i7-720QM is more than twice as fast when it comes to the renders and exports.
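The back-of-the-envelope version of those figures (assumed numbers from the argument above, not measurements), relative to a plain 4-core chip with HT off:

```python
base = 1.00                  # 4 cores, no HT
four_plus_ht = base * 1.30   # best-case +30% from hyper-threading
eight_real = base * 1.85     # ~75-90% from doubling real cores; rough midpoint

print(f"4C + HT : {four_plus_ht:.2f}x")
print(f"8C real : {eight_real:.2f}x")
print(f"8 real cores beat 4C + HT by ~{eight_real / four_plus_ht - 1:.0%}")
```
-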
Great explanation with good links, thanks! +1
-
TDP is a specification and therefore does not change. By overclocking, the processor's power draw can exceed TDP and operate out of specification, but the TDP itself remains the same.
Hardware modding should be able to do this.
And as mentioned before, it's not always about clock speed. The newer architecture may also bring new instructions that can be used to produce a much faster algorithm for software on top of any improvements in micro op throughput.
Last time I looked, Task Manager didn't show actual CPU workload at all, but a scheduling load; i.e., Task Manager could show 100% usage while the CPU is active for less than 10% of the time. An example: with clock modulation set to 50% and running Linpack, Task Manager would continue to show 100% even though the processor is inactive ~50% of the time. Another Linpack example: running it on a quad with HT but with only 4 threads on separate physical cores. Even though the CPU would be at ~100% load, Task Manager will show ~50%, just because there is little scheduled on the other 4 logical cores.
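You can see that scheduling-vs-work gap yourself; a minimal sketch, assuming a 4-core/8-thread chip: keep 4 busy processes running and Task Manager reads ~50%, even though the physical cores can be essentially saturated.

```python
from multiprocessing import Process

def spin() -> None:
    while True:
        pass  # pure busy loop, no I/O

if __name__ == "__main__":
    procs = [Process(target=spin) for _ in range(4)]  # 4 of 8 logical CPUs
    for p in procs:
        p.start()
    input("Watch Task Manager, then press Enter to stop... ")
    for p in procs:
        p.terminate()
```
-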
Perhaps more importantly, Intel's TDPs are listed for a specific voltage and processor load. Thus while the listed TDP may not change, the "actual" thermal load of the processor will, as load and voltage do. But that's just getting nitpicky.
-
cookinwitdiesel Retired Bencher
People have yet to hardware-mod more voltage onto an i7 CPU in a laptop. And as for the TDP remark, technically you are right; I meant thermal load, as Judicator mentioned. The main point, though, was that in raising the TDP limit using ThrottleStop you do the same thing as raising the voltage: you are altering the power thresholds of the chip as laid down by the system BIOS.
-
Not really sure what you're trying to say, since all I can make of it is that load varies, which is pretty obvious anyway.
To try and make things a little clearer my understanding has been that "TDP is defined as the worst-case power dissipated by the processor while executing publicly available software under normal operating conditions, at nominal voltages that meet the load line specifications" or something along those lines.
In other words, the manufacturers should design around that specification, the key word being the "D" in TDP: Design. By doing so, everything should work as expected as long as the CPU is operated within its design limits.
A CPU may be designed with a TDP of 35W; just because you run it at less than that, or overclock and exceed it, doesn't change the fact that the TDP is still 35W.
As for ThrottleStop, you are not changing the TDP but the limits at which turbo throttling occurs. For instance, a CPU with a 55W TDP and programmable TDP/TDC limits could have those limits raised so that turbo throttling didn't occur at 55W but at 65W. The TDP is still 55W, however, as that is the designed specification. In the same way, if Vcore is specified as 1.5V max and you run it at 1.8V, it doesn't mean the Vcore specification changed; it just means you exceeded it. -
cookinwitdiesel Retired Bencher
TDP is the most power that the chip will possibly consume while under the Intel-specified operating parameters (voltage). It is for this number that a heatsink in a laptop is typically designed. With ThrottleStop, when you raise the TDP setting, you are allowing the chip to operate at a higher power level than the Intel-specified TDP.
If you do not believe me, get a Kill A Watt and look at the difference in power consumption with and without ThrottleStop. People have measured as high as 120W being consumed by the CPU alone when using ThrottleStop.
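Numbers like that pass a sanity check: dynamic power roughly scales as P ~ C * V^2 * f. A sketch with assumed round numbers for the voltages and clocks (not anyone's actual settings):

```python
tdp_w = 55.0                 # rated TDP at stock
f_stock, f_oc = 2.0, 3.3     # clock in GHz
v_stock, v_oc = 1.10, 1.35   # core voltage in V

p_oc = tdp_w * (f_oc / f_stock) * (v_oc / v_stock) ** 2
print(f"estimated overclocked power: {p_oc:.0f} W")  # ~137 W, same ballpark
```
-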
Let's add another Intel video on turbo/hyperthreading:
Intel® Processor Technologies
because it seems, according to them, it does work like the previous video showed,
regarding TDP/TDC/temps.
PS:
I have had mine as high as 215 watts on that meter
when running the Vantage CPU test 1,
or as low as 1x... in this shot, 3x, with RAM running at 25 MHz... haha
-
It's actually an average of the power draw of various processor intensive software packages, with probably a little extra added on. The thing is, it's quite possible to go over the TDP because it is an average, not the absolute worst case. It won't happen often, so Intel is still fairly safe in putting out that number, but there are many times I wish it was indeed a worst-case scenario, or, at least, that they also put out worst-case numbers, sort of like what AMD does with listing both ACP and TDP.
-
That's why AMD chips burned up so fast: they used to give the worst-case scenario... as far as I could tell... lol (back in the day; they are far better now).
The slightest bit of overheating and that was it; the chip burned up and you had to go get another.
But enough about this TDP/TDC... if you don't have a chip capable of finding out, there's no sense in arguing it out. Those of us with them already know what the end result is...
-
Really, what are you talking about?
-
You'll have to Google it. I'm not about to go into all of that like you guys did with this TDP. I'm good.
But if you were a bencher... then you would know already. -
You are either making stuff up, greatly embellishing the truth, or a lot of both. I guarantee it. And I know this because, before buying my first Intel system in late 2008, I was running through AMD hardware like there was no tomorrow.
-
I work on computers for a living... I highly doubt it.
I've been doing this since 1992. -
moral hazard Notebook Nobel Laureate
One guy did it for a GX740 (though he had an i5 installed):
Google Translate
I'm sure someone has done the same with an i7; not everyone posts everything they do on the net (and in English).
frolly did an i7 on a GX740,
but it may be irrelevant when you run a 920...
He would run into the same problems as I am now:
101 error... needs voltage.
http://www.hwbot.org/community/submission/1054246_follyman_cpu_z_core_i7_q720m_3702.4_mhz
SEARCH RESULTS:
CPU-Z - 3702.4 mhz - follyman (no team) - (Intel Core i7 Q720M @3702.4MHz)
2 points (2)
CPU-Z - 3518.67 mhz - johnksss (EVGA Enthusiasts) - (Intel Core i7 Q720M @3518.7MHz)
1.5 points
CPU-Z - 3406.38 mhz - DR650SE (EVGA Enthusiasts) - (Intel Core i7 Q720M @3406.4MHz)
1 points
CPU-Z - 3378.25 mhz - ty_ger07 (EVGA Enthusiasts) - (Intel Core i7 Q720M @3378.2MHz)
0.8 points (1)