Can someone explain to me how hyperthreading works and what the clock multiplier means? I know it means a multiple of a certain clock, like 12 x 21.3 MHz, which gets you to the max clock shown when you buy your processor, but why do processors work that way? And why do CPU manufacturers (i.e. Intel and AMD) lock their multipliers?
-
three of those four questions are pretty loaded. to start: hyper-threading, CPU multiplier, front side bus, CPU locking, and here's a decent post from tom's hardware's forum explaining a few of the reasons for locking the multiplier. after reading that, you might have some more specific questions.
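In a nutshell, the multiplier part works like this: the core clock is just the base clock (the front side bus back then, BCLK on newer platforms) multiplied by that ratio. A quick sketch with assumed example numbers (a 100 MHz base clock and a 35x multiplier, roughly what a Sandy/Ivy Bridge chip runs at; your 12 x 21.3 MHz figure works the same way):

#include <stdio.h>

int main(void) {
    /* assumed example values, not from any specific CPU:
       ~100 MHz base clock and a 35x multiplier */
    double base_clock_mhz = 100.0;
    int multiplier = 35;
    double core_clock_mhz = base_clock_mhz * multiplier;
    printf("core clock = %.0f MHz x %d = %.0f MHz (%.2f GHz)\n",
           base_clock_mhz, multiplier, core_clock_mhz, core_clock_mhz / 1000.0);
    return 0;
}

An unlocked multiplier just means the board will let you raise that ratio above stock, which is the easy way to overclock.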
-
So the CPU multiplier is not actually multiplying the clocks, it just represents a ratio? I don't understand hyperthreading at all, I guess I'm too young (15) to understand all this... I do have more knowledge of PCs than most people my age I know, so I guess that's good
-
Just to make sure that there is no confusion: since the first generation Core i, Intel ditched the Northbridge + Southbridge design. They incorporated the memory controller into the CPU, along with the PCI-E lanes for the graphics (at least in Sandy Bridge and Ivy Bridge), and integrated the remaining Northbridge and Southbridge functions into a single chip called the Platform Controller Hub (PCH). You can find block diagrams for Ivy Bridge here: AnandTech - The Intel Ivy Bridge (Core i7 3770K) Review. QM77 chipset diagram here: http://www.intel.com/content/www/us/en/chipsets/performance-chipsets/mobile-chipset-qm77.html
As for locking the multipliers, I would say they did it for monetary reasons (why buy a higher-end CPU if you can OC a lower-end model, and they can also sell CPUs with unlocked multipliers at a premium) as well as to prevent people from frying their chips by messing with them too much.
EDIT: Much more detailed explanation of Hyperthreading: http://www.makeuseof.com/tag/hyperthreading-technology-explained/ -
What the fudge did you just say, I honestly understood a mediocre amount of the technical talk
EDIT: I didn't fully read it the first time, I understand most now -
-
Also, I added a link to a more detailed article about hyperthreading in my previous post.
I stumbled upon a nice article that explained CPU multipliers among other things, I'll try to find it again. -
Karamazovmm Overthinking? Always!
In the case of CPU manufacturing there is something called binning: chips coming off the same line get sorted by quality, roughly
1) trash
2) viable
3) good
4) exceptional
and the higher-quality chips are destined for the higher price ranges.
For example, the i7-3610QM is the same chip as the i7-3720QM, only the former comes from a lower bin and won't efficiently support the higher clocks or other features, so instead of just throwing those chips away, Intel sells them for less money with fewer features enabled.
What you have to keep in mind about the CPU is that it's just a great calculator, so when evaluating things like computational power you are mostly focusing on floating-point operations.
I'm going to look for a basic presentation that I did on SB CPUs and try to post it here. It's more than a year old and I did it for uni. I don't remember what is in the slides, but I can fill in the gaps if there are any more questions -
-
This is getting a bit dated but still covers the basics, just note that some things have changed since 2005: http://www.hardwaresecrets.com/article/How-a-CPU-Works/209. One noteworthy change that I mentioned earlier is the fact that the memory controller is now integrated into Intel CPUs. Hardwaresecrets has a few interesting articles, some rather technical, some easy to understand and others in between. The one on the chipset (you'll get to know why a single chip is called a chipset) is also interesting, as well as the one on RAM timings; just be sure to check when each article was posted.
-
EDIT: Just read it and it really explains it in a much simpler way than Wikipedia, thanks for posting that article -
Question for you too, tijo - where is the PCH physically located? It's probably a silly question, but is it a separate chip or does it exist throughout the board (like as multiple "chips")?
-
It is a separate chip; its location depends on the board design. It is usually covered by some sort of heatsink. It doesn't necessarily require active cooling, but in some notebooks it is covered by the same heatsink as the CPU, while in others it's not and has some other method of cooling.
-
-
Here is a pic of the PCH on my G73JH; it is passively cooled by the palm rest assembly (there is a thermal pad making contact between the metal part of the palm rest and the PCH).
EDIT: as you can see, the PCH is a single chip. It's called a chipset because back in the early days the functions now managed by the PCH were handled by multiple chips (a set of chips). Those functions were later integrated into two chips (Northbridge and Southbridge) and eventually merged into a single chip when the memory controller was moved to the CPU. For the Core 2 and some other Intel CPUs, the Northbridge was called the Memory Controller Hub (MCH) and the Southbridge the I/O Controller Hub (ICH). -
^nice
That board still has the same components - or the same paths as before, though. It's just that they're on the same chip.
Not sure what's controversial about hyper-threading, by the way.
Essentially, it works like this. You ask a program to.. add 1+1. So the program translates that to machine-code.. something like..
mov eax, 1 ; set eax to 1
mov ebx, 1 ; set ebx to 1
add eax, ebx ; add ebx to eax, result stored in eax
Real compiler output wouldn't look exactly like that, but in principle you would do three main operations like this from the top of the hardware layer. The two moves are essentially not reducible, but the add instruction might in turn be split up into several steps (micro-ops) inside the processor. And when that happens, it's possible that:
1. The scheduler is waiting for something to complete.
2. There are free resources on the processor.
And in that case, it might be possible to place two program routines on the same processor core and execute them at the same time, since they aren't actually using the same registers or resources anyway. Ideally you end up being able to run two programs as if each of them had exclusive access to the processor.
In practice that doesn't actually happen all that often, though. In reality the typical use for it is when a thread in a program wants to read or write a file and then stalls while the IO happens. The processor is then free to do something else, and another thread can be scheduled on it while the first one is essentially idle, waiting for the IO to complete. From a certain point of view this is extremely hairy and prone to all kinds of problems, and of course it has a low probability of striking home outside synthetic benchmarks. But there is an effect in practical execution as well. The question is how much extra microcode is needed, and whether this requires faster/hotter hardware, that sort of thing.
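If you want to look at it from the software side, the OS simply sees each physical core as two logical processors when hyper-threading is on. A minimal sketch on Linux (assuming glibc's sysconf; the numbers are whatever your machine reports):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* logical processors the OS can schedule threads on */
    long logical = sysconf(_SC_NPROCESSORS_ONLN);
    printf("logical processors: %ld\n", logical);
    /* on a quad-core CPU with hyper-threading this typically prints 8;
       with HT disabled in the BIOS it would print 4 */
    return 0;
}

The scheduler treats those logical processors as separate targets, but two of them still share one core's execution units, which is why you don't get anywhere near double the throughput.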
Arguably, a more efficient way to do things would be to strictly schedule at the OS level instead, and require programs to declare beforehand what resources they will need and what sort of response they expect from the different routines. Then you would be able to schedule things a lot more tightly and run parallel operations as a rule instead of as an exception, potentially executing a lot of tasks concurrently without losing responsiveness, even at much lower clock speeds. But it would require a bit of discipline in the programming, care with memory and IO operations, and actually teaching people to program from the bottom up.
So in that context, hyperthreading is basically transparent parallelism, or parallelism that happens without the OS or a program controlling it. As opposed to explicit parallelism, where you would write the routines to run concurrently yourself or via an interpreter, and schedule tasks programmatically. We don't really have any hardware that would truly take advantage of that at the moment, so as things are, transparent parallelism is where it's at. But as processors become cheaper we can have more cores and modules, bus speeds get higher, RAM becomes able to do concurrent reads and writes to different processing elements... and meanwhile clock speeds hit the ceiling, preventing us from executing single threads quicker than before, meaning computers just won't go any faster that way. Then it's with explicit parallelism that the advancements are made.
That will be a new paradigm, though... sort of. And there's basically a requirement for microprocessor makers right now to offer linear single-thread performance for an extremely simplistic scheduler. You even see that with mobile phones: there is more and more "demand" for something with extremely high single-threaded performance (so you can burn your battery in a few minutes), but no demand for a massively higher number of concurrently running operations at, for example, a set power draw, via either longer instruction sets or more cores. This has to do with OSes and design choices, mainly. -
-
Karamazovmm Overthinking? Always!
You have two hands, right? Close both; while the air that is ''trapped'' in both hands is similar, that doesn't mean it's equal.
That's the thing. The silicon substrate might not be uniform, or there can be an error when they are printing, and the list goes on; small variations lead to different products -
Though Intel/AMD probably get pretty good yields. IIRC, most of the low-end Intel chips (desktop Celerons/low-end i5s) are artificially cut-down i3s/i7s instead of actually being lower-performing/defective parts, since if they relied only on "bad" chips, they wouldn't have enough of them to sell. -
-
-
-
(And, if we're talking about special features, my -2720QM trumps your -3160QM) -
-
-
A bit like with the 640LE, 640M, 650M and 660M cards Nvidia has. They're actually made from the same chip. And I'm sure someone is going to insist that they've been selected according to performance and yield and whatnot. But all of those chips have to perform above a certain level to work at all. Same for the RAM they use - it has to be the same speed (which again is cheaper than having fifteen different designs in manufacturing). Which means that all of those cards actually perform closer to each other than the specifications in the vBIOS allow (and suggest). And that's why we're seeing a pretty much 100% success rate for people who overclock the 640LE cards to 650 and 660 frequencies.
Happens once in a while. AMD capitalized on it for a while with the K6 and K7 processors, for example, by shipping them unlocked to let people overclock until the chips would fry. Asus had mainboards with BIOSes that let you mess around with it, and advertised that. And you had some insane percentage overclocks on just air cooling that haven't been matched since then. Athlon XP chips raised from 1.2 GHz to 2.8 GHz, that sort of thing.
Now the chips are more similar, though, simply because, like I said, the performance of the chips has to be within a fairly high range in the first place, and the process is more complex and much better than it was back then. So you don't have those huge gaps any more. The architectures tend to hit the target they're designed for (at different yields - they're either usable or they're not), and when a new layout/design turns up it's usually bus changes or optimisations... architecture changes... that matter more than processor speeds. -
There is one point that needs to be mentioned here. Hyperthreading is a two-way street. Just because a system has HT does not mean everything you run on it will make full use of the technology. That is one of the reasons the Core 2 Quad did not become that big: software needs to be written to make use of multithreading, and at the time of the Quad, few programs were written to run on four cores. When you want to run on multiple cores, the complexity of designing the software also increases severalfold. There are still plenty of programs that use only two threads, though many of the more common ones, like newer games, office suites and Windows itself, make full use of multithreading.
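To make the "software has to be written for it" point concrete, here is a minimal toy sketch (my own example, not from any of the linked articles) of splitting a chunk of work across threads with pthreads. A program that never does something like this will sit on one logical processor no matter how many the CPU exposes:

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4      /* assumed thread count for the example */
#define N 100000000L

static double results[NTHREADS];

/* each thread sums its own slice of 1..N */
static void *partial_sum(void *arg) {
    long id = (long)(size_t)arg;
    long begin = id * (N / NTHREADS) + 1;
    long end   = (id == NTHREADS - 1) ? N : begin + N / NTHREADS - 1;
    double sum = 0.0;
    for (long i = begin; i <= end; i++)
        sum += (double)i;
    results[id] = sum;
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&threads[t], NULL, partial_sum, (void *)(size_t)t);
    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        total += results[t];
    }
    printf("sum of 1..%ld = %.0f\n", N, total);
    return 0;
}

Compile with something like gcc -O2 -pthread sum.c; set NTHREADS to 1 and the exact same work runs on a single core, which is basically what single-threaded software does on a quad.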
-
I won't discount a CPU based on name/number alone. -
-
But I find driving more fun than gaming nowadays. When I was your age, I was glued to the PS2, and I still game on my Xbox 360 a bit (mainly MW3 these days, with a bit of Ace Combat 6). -
Karamazovmm Overthinking? Always!
Shogun 2 CPU Test Sandy Bridge vs Ivy Bridge - Total War Center Forums
That's a thread on how much the CPU matters at the minimum clocks. Basically it's advisable to get an XM CPU, overclock it to 4.5-5 GHz, and overclock that 680M as high as it will go, and it will handle ultra at 1080p. I'm not confident that battles with 20-30k troops would be buttery smooth, though; you need more horsepower for that. -
Or take a look at GW2, another insane CPU hog. For now, it doesn't really matter for most games though.
-
Karamazovmm Overthinking? Always!
And btw, I found the presentation, but it's corrupted. I would have gone to the trouble of translating it from Portuguese to English, but with 70% of the content lost, I threw it out. I had a major breakdown of my NAS late Friday and spent the whole weekend trying to solve it; thankfully I got most of the stuff back. That's a first, seeing a RAID 5 fail so miserably.
-
BTW, 100th post -
Karamazovmm Overthinking? Always!
as I said wiki is good for basic info
RAID - Wikipedia, the free encyclopedia
RAID 5 (block-level striping with distributed parity) distributes parity along with the data and requires all drives but one to be present to operate; the array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. However, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced and the associated data rebuilt. Additionally, there is the potentially disastrous RAID 5 write hole. RAID 5 requires at least three disks.
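The distributed parity it mentions is basically just XOR across the blocks of a stripe. A toy sketch of the idea with a made-up 3-disk stripe (this is how the math works, not how a real controller is implemented):

#include <stdio.h>

int main(void) {
    /* a 3-disk RAID 5 stripe: two data blocks plus one parity block */
    unsigned char d0 = 0x5A, d1 = 0xC3;
    unsigned char parity = d0 ^ d1;   /* stored on the third disk */

    /* pretend the disk holding d1 died: rebuild it from the survivors */
    unsigned char rebuilt = d0 ^ parity;
    printf("d1 = 0x%02X, rebuilt = 0x%02X\n", d1, rebuilt);
    return 0;
}

Any single missing block can be recovered that way, which is why one dead drive is survivable while a second failure is not; the write hole linked below is a different problem, where an interrupted write leaves data and parity out of sync.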
RAID 5 write hole - Wikipedia, the free encyclopedia
This is what happened; it reminds me to get another type of RAID -
-
Karamazovmm Overthinking? Always!
-
There seems to not be a really good understanding of how hyperthreading works. It does not increase the processing power of the CPU; the processing power of the CPU remains the same whether you have hyperthreading enabled or disabled. All hyperthreading does is help you get closer to the theoretical maximum processing power of the CPU.
There are parts of the CPU core that do the computations, but these are never used 100% efficiently. When your computer says 100% load, it does not mean that everything is being utilized to its fullest. Normally, in any given clock cycle only one thread may execute on the core's execution units, though multiple instructions from that one thread can be executed at the same time, to try to use as many of the execution units as possible. Many times there are long lapses when the core is not doing anything while it is waiting for data from cache or memory, so many clock cycles can be wasted. It is also unlikely that any single thread will be able to use all of the execution units available at any one time. Where hyperthreading comes in is that it allows two threads to issue instructions to the core at the same time. This does not mean a theoretical doubling of processing power! Processing power remains the same. It just reduces inefficiency in the core and helps utilize otherwise unused execution units. In practice, performance changes by anywhere from -5% up to around +30% depending on the scenario.
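If anyone wants to see that for themselves, here is a rough sketch of the kind of micro-benchmark you could run (my own toy code, and the exact numbers depend entirely on the workload): it does a fixed amount of dependent floating-point work split across however many threads you ask for, so you can time e.g. 4 threads (one per physical core on a quad) against 8 (using the hyper-threaded siblings) and compare.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TOTAL_ITERS 400000000L   /* fixed total amount of floating-point work */
#define MAX_THREADS 64

static int nthreads;
static double sinks[MAX_THREADS];  /* per-thread results so the loop isn't optimized away */

static void *work(void *arg) {
    long id = (long)(size_t)arg;
    double x = 1.0;
    /* a dependent chain of FP operations, split evenly across the threads */
    for (long i = 0; i < TOTAL_ITERS / nthreads; i++)
        x = x * 1.0000001 + 0.0000001;
    sinks[id] = x;
    return NULL;
}

int main(int argc, char **argv) {
    nthreads = (argc > 1) ? atoi(argv[1]) : 4;
    if (nthreads < 1) nthreads = 1;
    if (nthreads > MAX_THREADS) nthreads = MAX_THREADS;

    pthread_t threads[MAX_THREADS];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < nthreads; i++)
        pthread_create(&threads[i], NULL, work, (void *)(size_t)i);
    for (long i = 0; i < nthreads; i++)
        pthread_join(threads[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d threads: %.2f s\n", nthreads, secs);
    return 0;
}

Build with gcc -O2 -pthread bench.c and run it with 4 and then 8 as the argument on a quad-core HT chip; the 8-thread run will usually be somewhat faster, but nowhere near twice as fast, which is exactly the point above. -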
-
-
I really, really used to love overclocking. I was very passionate about it and was buying, selling, and swapping computer components ALL the time. But eventually there was no more sport in it, and it was just an exercise in spending the most money on hardware.
-
-
-
-
-
-
Karamazovmm Overthinking? Always!
and I use a NAS at home. I had to give mine away to my father and built another one, and that brand new, spanking new one failed catastrophically.
I'm going to return all the HDDs and get new ones. There was 16 TB of storage in the new one. How many drives do I have? The drives are 4 TB each, Hitachi Travelstar type -
-
So the best backup today is tape.
Or, if you **really, really, really** need to make sure your data survives for a long, long time, you need to look at the Egyptians: stone, and lots of it! -
Question about CPU's
Discussion in 'Hardware Components and Aftermarket Upgrades' started by VaultBoy!, Oct 7, 2012.