Oh, sorreeee Mr. Cool.
He said he got his ES QX9100, yes QX9100, to 3.8 GHz. Your QX9300 had better overclock higher, considering the QX9300 uses a later stepping that is way better at overclocking. And who cares if you have the highest-clocked QX9300? Do you just like to toot your own horn? Why do you have to come in here being so arrogant and disrespectful?
-
this thread is disrespectful to a 920...so you're even.
-
for both the 920/940 i see higher ghz but with hyperthreading disabled.
i wanted to find out what the highest stable ghz was with hyperthreading enabled. -
ahhh, ok. that makes sense then.
because you can use msconfig advanced options and set it to 4 and it will turn off hyperthreading. extreme cpus only, it would seem. i tried it with my 330/430/620/720 and it did not turn it off. -
i think i can turn off my cores/threads too with msconfig start up.
anyways as johnny said most guys in the clevo/sager forums that have the highest clocks have ht turned off.
i was wondering what the highest stable clock was on all 8 threads -
cookinwitdiesel Retired Bencher
I really hate when people try to say an overclocked part is better than a stock but more expensive part, acting like the latter cannot be comparably if not further overclocked. If he wants to feel happy that he got a decent performing chip, that is fine, but do not go around trying to bash people with better hardware to make yourself feel good.
And it's Cook, not Cool -
for cpuz, you only need to run 1 core, whereas with wprime 1.55 you need to run as many threads as possible to achieve faster times... i run between 3.6 and 3.7 for daily use...but then i'm not running my chip at 100 percent on all 4 cores times two either. (have desktop machines for that) -
-
cookinwitdiesel Retired Bencher
HT does not use extra circuitry...it just fills out the same pipelines that are already there, maximizing core utilization
-
cookinwitdiesel Retired Bencher
As far as I know (and I could be wrong) HyperThreading just fills any gaps in the pipelines, pretty much forcing the CPU to 100% utilization whenever there is a workload. That is why HT causes more heat; the CPU has less/no downtime between execution of instructions
But that is using the knowledge I learned about general processors in my undergrad Computer Design class.....Intel is most likely a little more advanced than what I was learning haha -
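The gap-filling idea described above can be sketched with a toy model. This is entirely illustrative (real cores issue several instructions per cycle, and this is not Intel's implementation): each cycle has one issue slot, a thread either issues or stalls, and with SMT a sibling thread can grab the stalled cycle.

```python
# Toy model of SMT gap-filling (not how real hardware works):
# '1' = thread ready to issue this cycle, '0' = thread stalled.

def cycles_used(pattern_a, pattern_b=None):
    """Count issue slots filled, optionally letting a sibling thread
    fill the cycles where the first thread is stalled."""
    used = 0
    for i, a in enumerate(pattern_a):
        if a == "1":
            used += 1
        elif pattern_b and pattern_b[i] == "1":
            used += 1  # SMT sibling fills the gap
    return used

single = cycles_used("1101011010")                 # gaps stay empty
smt = cycles_used("1101011010", "0110100101")      # sibling fills stalls
print(single, smt)  # -> 6 10
```

More slots busy every cycle is exactly why an HT chip runs hotter: there is less downtime between instructions.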
really makes no difference. the os isn't using it, nor does it see it.
but performance test (passmark 7) acts like it won't run if you don't enable the missing cores it's looking for. although i got it to run once, but that was it. -
-
cookinwitdiesel Retired Bencher
-
bottom line...who cares...lol
a qx9100 does not outrun a 920xm
and a qx9300 only beats the 920xm by like 20 mhz in max speed: 4.33 ghz vs the 920xm at like 4.313 ghz -
This helped me understand HT:
http://www.youtube.com/v/7c3CfJe_6kQ -
All hyperthreading does is allow for more efficient utilization of the execution units that are already there. When your processor shows a load of 100%, that doesn't mean that every execution unit is being used fully, it means that all clock cycles are being used. In some cases though it can show a decrease in performance. At best, Intel says it will increase performance by 30%. There is a correlation between performance increase of hyperthreading and how parallelizable the software is.
To go in depth, look here:
Performance Insights to Intel® Hyper-Threading Technology - Intel® Software Network
How to Determine the Effectiveness of Hyper-Threading Technology with an Application - Intel® Software Network -
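The correlation with parallelizability mentioned above can be put in Amdahl-style numbers. This is a back-of-the-envelope sketch with my own assumed figures (only the ~30% best-case gain comes from the discussion; the fractions are made up):

```python
# Amdahl-style estimate: only the parallel fraction of a program
# benefits from HT; the serial part runs at the same speed.

def ht_speedup(parallel_fraction, ht_gain=0.30):
    """Overall speedup when the parallel part gets ht_gain (~30%
    is Intel's oft-quoted best case) and the serial part gets nothing."""
    serial = 1.0 - parallel_fraction
    parallel = parallel_fraction / (1.0 + ht_gain)
    return 1.0 / (serial + parallel)

for frac in (0.25, 0.50, 0.95):
    print(f"{frac:.0%} parallel -> {ht_speedup(frac):.2f}x")
```

Even a highly parallel workload (95%) only sees about 1.28x overall, which is why HT gains in practice land well below the headline 30%.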
cookinwitdiesel Retired Bencher
Have to agree with trottel there.....they should have shown something like all the blues going in and like 1 in 3 yellows, since the "2nd" thread for each core is really just using the leftover space in the pipeline and not a full independent datapath
-
I remember the original Intel demonstration when they first showed the world their hyperthreaded P4. It was even worse than the video tHE j0KER posted. They had two identical computers with P4 processors side by side, one with HT, one without. They were ostensibly running the same software and had tons of windows and applications running at the same time. The computer without HT was barely crawling and moving slower than molasses. The computer with HT was zipping through everything like you wouldn't believe. There were lots of oohs and aahs in the audience followed by lots of clapping. Funny how one computer was running 10x faster than the other even though Intel themselves say that HT could give them up to a 30% performance boost. -
cookinwitdiesel Retired Bencher
And the P4 hyperthreading was AWFUL compared to the Nehalem hyperthreading....
Whichever thread is primarily on the "real" core and not the hyperthreaded one will perform better though; this has been shown multiple times. Hyperthreading is only worth using if all of the physical cores are already occupied -
Well, that's also an OS processor scheduling limit, too. You need an OS that's "aware" of which is a logical processor and which is a "real" processor to have threads scheduled appropriately... which Windows 7 is supposed to be.
-
And there is no real and fake core. -
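The scheduling point is easy to sketch. Assuming a made-up topology where logical CPUs (0,4), (1,5), etc. are SMT siblings (the actual pairing varies by chip and OS, and this is not the Windows 7 scheduler), a topology-aware scheduler hands out one thread per physical core before touching any sibling:

```python
# Sketch: prefer one logical CPU per physical core before using
# SMT siblings. 'siblings' pairs logical CPUs that share a core.

def pick_cpus(n_threads, siblings):
    """Return the logical CPUs to use: first one per physical core,
    then the SMT siblings only once every core already has a thread."""
    order = [pair[0] for pair in siblings] + [pair[1] for pair in siblings]
    return order[:n_threads]

# A hypothetical 4-core/8-thread chip like the 920XM:
topo = [(0, 4), (1, 5), (2, 6), (3, 7)]
print(pick_cpus(3, topo))  # -> [0, 1, 2]: three real cores, no siblings
print(pick_cpus(6, topo))  # -> [0, 1, 2, 3, 4, 5]: siblings only after all cores are busy
```

An OS that cannot tell logical from physical CPUs may instead pack two threads onto one core's sibling pair while another core sits idle.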
there is a physical and a virtual one. the physical has access to all the power of the pipeline, while the other must try to fill the gaps left by the first one
thus the physical is always way superior to the virtual one
and on the p4, well, netburst as a whole was awful -
niffcreature ex computer dyke
The information (at the beginning of the thread) is not surprising to me. Along the same lines as my overclocked Quadro FX 3700m 65nm outrunning a gtx 280m 55nm at stock.
Because of the way the chips are manufactured and branded, this will always be true. And I'd bet there are qx9100s that will beat the overclocking limit of a few 920XMs
What would interest me more is an entire architecture that overclocks so badly that the older architecture will beat it when both are pushed to the limit.
I'd also bet there are gtx 280ms that won't run at 600/1500mhz, which makes me happy -
cookinwitdiesel Retired Bencher
And for the HT tests, I will do some tonight for you on my desktop. I will do 2 real cores vs 1 real 1 HT and show you the differences. I will also play with affinity to do a single threaded app (SuperPi) on both a real and a HT core. -
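The affinity experiment described above can be scripted rather than clicked through. A rough Linux sketch (the `timed_spin` helper is mine; on Windows you would pin SuperPi via Task Manager's Set Affinity instead), assuming logical CPU numbers map to real cores and HT siblings as reported by the OS:

```python
# Pin this process to one logical CPU, then time a fixed busy loop.
# Comparing a run pinned to a "real" core against one pinned to its
# HT sibling (while the core is loaded) shows the difference.
import os
import time

def timed_spin(cpu, loops=2_000_000):
    """Pin the current process to a single logical CPU (Linux-only
    os.sched_setaffinity) and time a fixed amount of busy work."""
    os.sched_setaffinity(0, {cpu})
    start = time.perf_counter()
    x = 0
    for i in range(loops):
        x += i
    return time.perf_counter() - start

print(f"cpu 0: {timed_spin(0):.3f}s")
```

Running the same loop pinned to each logical CPU in turn, with a second load occupying the physical cores, is essentially the real-core-vs-HT-core comparison described above.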
A processor with hyperthreading duplicates the transistors that hold the architecture state, which shows up as two processors to the OS, each with their own interrupt controller. These two logical cores are identical and are equally tied into, and equally share, all other processor resources. If both logical cores are requesting the same resources, they are going to share. One is not a superior core that can use as many execution resources as it wants at all times, and the other is not an inferior one that only uses execution units the other isn't using. They intelligently share the load, so that if you run the same thing on both, they will take just as long to do the work, not one finishing way faster than the other.
If what you say is true, do you have any idea how much more difficult it would be to effectively manage and balance workloads than it already is? -
-
At 3.33GHz (stable):
At 3.49GHz (not so stable):
At 3.63GHz (insane):
-
link please. -
so theoretically, if a c2q and an i7 xm could hit the same max ghz and the cpus were pushed to 100%, they would perform the same? - since the i7 would not have enough headroom to enable hyperthreading if it operated at 100%?
-
damn 5482741!
those are some great numbers there! -
Also 100% usage does not mean 100% usage. The usage % you see in the task manager is not taking into account the actual usage of the execution units. That is where hyperthreading comes in: to take advantage, or at least try to, of any processing power that would otherwise go unused. -
This would mean that in the best case, an i-core like, say, the i7-820QM would run 15%-30% faster through a well-threaded program than a (theoretical) i7-820QM that had no hyper-threading (or possibly an i7-820QM with hyper-threading turned off). It's still not as good as an (theoretical) actual 8-core processor (without hyper-threading) running at i7-820QM speeds, (which would be more like 75%-90% faster, although at the cost of a probably close to double die size and all the heat and problems that entails), but it's an easier design upgrade than an all-out full core.
Remember, too, that except for the simplest continuous threads, there will usually be "breaks" in the thread where the thread is stalled for various reasons, be it the result of an I/O read, waiting on the result of another thread, user input, etc. It's these times when the processor is thus not being utilized that the other thread is run.
This, by the way, also explains a lot of the practical performance differences between the Arrandales and the Clarksfields; for most programs that are not very well multi-threaded (this is not quite the same as the number of cores they use), the Arrandales will obviously win out with their faster speeds. But when you stress them, you find out that hyper-threading is not as good as "real", even if slower, cores. Here are some real-world numbers from an 8740w owner that tested an i7-620M and an i7-720QM in the same machine. Note how the i7-720QM is more than twice as fast when it comes to the renders and exports. -
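As a back-of-the-envelope check on those percentages: taking the ~30% HT best case from this discussion, and assuming (my number, not a benchmark) that eight real cores scale at roughly 95% each, the gap between HT and real cores falls out directly:

```python
# Illustrative arithmetic only: relative throughput of a 4-core chip,
# the same chip with HT, and a hypothetical true 8-core version.

base = 4 * 1.00          # four physical cores
with_ht = base * 1.30    # HT best case: ~30% on top of the four cores
eight_core = 8 * 0.95    # eight real cores at an assumed ~95% scaling

print(f"4C: {base:.1f}  4C+HT: {with_ht:.1f}  8C: {eight_core:.1f}")
print(f"8C over 4C: {eight_core / base - 1:.0%}")  # ~90%, vs ~30% for HT
```

That ~90% vs ~30% spread is the "not as good as real cores, but a much cheaper design upgrade" trade-off described above.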
cookinwitdiesel Retired Bencher
People have yet to hardware mod more voltage to an i7 cpu in a laptop. And as for the TDP remark, technically you are right; I meant thermal load, as Judicator mentioned. The main point though was that in raising the TDP using ThrottleStop you do the same thing as raising the voltage: you are altering the power thresholds of the chip as laid down by the system bios.
-
To try and make things a little clearer my understanding has been that "TDP is defined as the worst-case power dissipated by the processor while executing publicly available software under normal operating conditions, at nominal voltages that meet the load line specifications" or something along those lines.
In other words, the manufacturers should design around that specification, the key word being the "D" in TDP: design. By doing so, everything should work as expected as long as the CPU is operated within its design limits.
A CPU may be designed with a TDP of 35W, just because you run it at less than that or overclock and exceed it, doesn't change the fact the TDP is still 35W.
As for ThrottleStop, you are not changing the TDP but the limits at which turbo throttling occurs. For instance, a CPU with a 55W TDP and programmable TDP/TDC limits could have the limits increased so turbo throttling didn't occur at 55W but at 65W. The TDP is still 55W, however, as that is the designed specification. In the same way, if Vcore is specified as 1.5V max and you run it at 1.8V, it doesn't mean the Vcore specification changed; it just means you exceeded it. -
cookinwitdiesel Retired Bencher
TDP is the most power that the chip will possibly consume while under the Intel specified operation parameters (voltage). It is for this number that a heatsink in a laptop is typically designed. With ThrottleStop, when you raise the TDP setting, you are allowing the chip to operate at a higher power level than the Intel specified TDP
If you do not believe me, get a kill-a-watt and look at the difference in power consumption with and without ThrottleStop. People have measured as high as 120W being consumed by the CPU alone when using ThrottleStop. -
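A rough way to see why measured power blows so far past the rated TDP once the limits are raised: dynamic CPU power scales roughly as P = C·V²·f, so a modest voltage bump plus a big clock bump multiplies quickly. The voltages and clocks below are made-up illustration, not 920XM specs:

```python
# Dynamic power scaling sketch: P = C * V^2 * f, so relative to a
# baseline, power scales by (V_new/V_base)^2 * (f_new/f_base).

def scaled_power(p_base, v_base, v_new, f_base, f_new):
    """Estimate new power draw from a baseline using the V^2 * f rule."""
    return p_base * (v_new / v_base) ** 2 * (f_new / f_base)

# e.g. a 55 W chip pushed from 2.0 GHz @ 1.00 V to 3.3 GHz @ 1.25 V
p = scaled_power(55, 1.00, 1.25, 2.0, 3.3)
print(f"{p:.0f} W")  # -> 142 W, well past the 55 W the heatsink was built for
```

Which is in the same ballpark as the 120 W+ kill-a-watt readings mentioned above.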
lets add another intel video for turbo/hyper threading
Intel® Processor Technologies
because it seems..according to them...it does work like the previous video posted.
regarding tdp/tdc/temp
ps:
i have had mine as high as 215 watts on that meter
when running the vantage cpu test 1
or as low as 1x..in this shot...3x with ram running at 25 mhz..haha
-
that's why amd chips burned up so fast. they used to give the worst-case scenario...as far as i could tell...lol (back in the day)(they are far better now)
slightest bit of overheating and that was it. chip burned up and you had to go get another.
but enough about this tdp/tdc...if you don't have a chip capable of finding out...no sense in arguing it out. those of us with them..already know what the end result is... -
-
but if you were a bencher...then you would know already. -
-
i work on computers for a living...i highly doubt it.
been doing this since 1992 -
moral hazard Notebook Nobel Laureate
Google Translate
I'm sure someone has done the same with an i7; not everyone posts everything they do on the net (and in English). -
frolly did an i7 on a gx740
but it may be irrelevant when you run a 920...
he would run into the same problems as i am now.
101 error...needs voltage
http://www.hwbot.org/community/submission/1054246_follyman_cpu_z_core_i7_q720m_3702.4_mhz
SEARCH RESULTS:
CPU-Z - 3702.4 mhz - follyman (no team) - (Intel Core i7 Q720M @3702.4MHz)
2 points (2)
CPU-Z - 3518.67 mhz - johnksss (EVGA Enthusiasts) - (Intel Core i7 Q720M @3518.7MHz)
1.5 points
CPU-Z - 3406.38 mhz - DR650SE (EVGA Enthusiasts) - (Intel Core i7 Q720M @3406.4MHz)
1 point
CPU-Z - 3378.25 mhz - ty_ger07 (EVGA Enthusiasts) - (Intel Core i7 Q720M @3378.2MHz)
0.8 points (1)
QX9100 outruns 920XM
Discussion in 'Hardware Components and Aftermarket Upgrades' started by Raidriar, Oct 27, 2010.